
LocalAISlackBot


LocalAISlackBot is a local AI powered Slack bot built with Python. It integrates local LLMs (via Ollama, LM Studio, or a cloud provider), web search (Serper or SearXNG), vision, file analysis, image generation, music generation, and Python execution directly inside your Slack workspace.

✨ Features

What can this local AI do in Slack?

  • 💬 Chat: Natural conversation with context awareness.
  • 🧠 On-Demand RAG Memory: A specialized tool that allows users to explicitly save information to a channel-specific vector database. The bot only accesses this memory for the specific channel where it was saved, ensuring privacy and context relevance.
  • 📄 File Analysis: Upload PDF, TXT, or DOCX files, and the bot will read and answer questions based on them.
  • 👁️ Vision: Upload images and ask the bot to describe or analyze them (requires a Vision model).
  • 🕒 Local Time and Date: Retrieve the current system time and date.
  • 🌐 Web Search: The bot can search the internet using either Serper API (Cloud) or SearXNG (Local/Self-hosted).
  • 🎨 Image Generation: The bot can generate images locally using ComfyUI (requires installed ComfyUI).
  • 🎵 Music Generation: Create original audio and music (requires separate Music Gen Python program running).
  • 🐍 Python Execution: Write and execute Python code snippets (requires separate Python Execution environment/program running).

🛠️ Tech Stack & Libraries

  • Python 3.11.9
  • Slack Bolt
  • Ollama
  • LangChain
  • FastAPI for optional services
  • ComfyUI for image generation

⚙️ Installation & Setup

1 Create the Slack App

  1. Go to the Slack API Dashboard.
  2. Click Create New App.
  3. Choose From Scratch.
  4. Enter your app name (example: AI-Bot).
  5. Select your workspace.

2 Configure Bot Token Scopes

Go to:

OAuth & Permissions → Bot Token Scopes

Add the following scopes:

| Scope | Description |
| --- | --- |
| app_mentions:read | View messages that mention @AI-Bot |
| assistant:write | Allow AI-Bot to act as an App Agent |
| channels:history | View messages in public channels |
| chat:write | Send messages as AI-Bot |
| commands | Enable slash commands |
| files:read | View files shared in conversations |
| files:write | Upload and manage files |
| groups:history | View messages in private channels |
| im:history | View direct messages |
| users:read | View users in workspace |
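If you prefer not to click the scopes in one by one, Slack can apply them all at once through an app manifest (Features → App Manifest). A partial sketch in the manifest's YAML format, using the app name from step 1:

```yaml
# Partial Slack app manifest covering the bot token scopes above.
display_information:
  name: AI-Bot
oauth_config:
  scopes:
    bot:
      - app_mentions:read
      - assistant:write
      - channels:history
      - chat:write
      - commands
      - files:read
      - files:write
      - groups:history
      - im:history
      - users:read
settings:
  socket_mode_enabled: true
```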

After adding scopes:

  • Click Install to Workspace
  • Authorize the app

Copy your:

  • Bot User OAuth Token (xoxb-...)

You will need this in your configuration panel.

3 Enable Slash Command

Go to: Slash Commands → Create New Command

Configure:

Name: /clear_memory

Description: Clears the conversation memory for the user who runs this command.

Save the command.

4 Enable Socket Mode

Go to: Settings → Socket Mode and turn ON Enable Socket Mode.

Then:

  1. Click Generate App-Level Token.
  2. Add the scope: connections:write
  3. Copy your App-Level Token (xapp-...).

This token is required for the WebSocket connection.

5 Event Subscriptions

Go to: Event Subscriptions, turn on Enable Events, and subscribe to the following bot events:

app_mention
message.channels
message.groups
message.im

Save changes.

6 Configure Tokens in the Web Interface

After generating both tokens, enter them in the bot's web configuration interface:

  • Bot Token → xoxb-...
  • App Token → xapp-...

Without both tokens, the bot cannot connect.
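Under the hood, a Slack Bolt bot wires these two tokens together roughly as follows. This is a minimal sketch, not the project's actual code: the environment variable names stand in for whatever the web configuration interface stores.

```python
import os

def validate_tokens(bot_token: str, app_token: str) -> None:
    """Sanity-check the two required tokens before connecting."""
    if not bot_token.startswith("xoxb-"):
        raise ValueError("Bot Token must start with xoxb-")
    if not app_token.startswith("xapp-"):
        raise ValueError("App Token must start with xapp-")

def run_bot() -> None:
    # slack_bolt is imported lazily so the helper above stays dependency-free.
    from slack_bolt import App
    from slack_bolt.adapter.socket_mode import SocketModeHandler

    bot_token = os.environ["SLACK_BOT_TOKEN"]  # xoxb-...
    app_token = os.environ["SLACK_APP_TOKEN"]  # xapp-...
    validate_tokens(bot_token, app_token)

    app = App(token=bot_token)

    @app.event("app_mention")
    def on_mention(event, say):
        say(f"Hi <@{event['user']}>!")

    # Socket Mode: the xapp- token opens the WebSocket connection,
    # so no public URL or request forwarding is needed.
    SocketModeHandler(app, app_token).start()
```

Calling run_bot() with both variables set starts the WebSocket connection; if either token is missing or malformed, the connection fails.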

7 Install Dependencies

Inside your project directory:

pip install -r requirements.txt

8 Run the Bot

python main.py

2. Local Model Setup - Ollama / LM Studio (The Brain)

Your bot can connect to:

  • Ollama
  • LM Studio
  • Any OpenAI-compatible local server on your network

You can even run the model on another PC on the same local network.

Option A - Ollama Setup

  1. Download and install Ollama.
  2. Start the server in the terminal:

ollama serve

  3. Default API endpoint:

http://localhost:11434

If running on another machine in your local network:

http://192.168.X.X:11434

Replace 192.168.X.X with the IP of the PC running Ollama.

Download a Model

For example, pull a recommended lightweight model in the terminal:

ollama pull qwen3:4b

You can also use any other supported model. List installed models:

ollama list

Copy the exact model name into your bot configuration.
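Once a model is pulled, the bot only needs the endpoint and the exact model name. A sketch of the non-streaming request shape for Ollama's /api/generate endpoint (the helper name is illustrative, not part of the project):

```python
import json
import urllib.request

def build_generate_request(host: str, model: str, prompt: str):
    """Build a non-streaming request for Ollama's /api/generate endpoint."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# Point at a local or LAN Ollama server with a model from `ollama list`.
req = build_generate_request("http://localhost:11434", "qwen3:4b", "Hello")
```

Sending it with urllib.request.urlopen(req) returns a JSON body whose "response" field holds the model's reply.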

Option B - LM Studio Setup

  1. Download LM Studio.
  2. Download a model inside LM Studio (example: Qwen 3 4B Instruct).
  3. Start the Local Server inside LM Studio:
  • Go to the Developer tab
  • Enable the OpenAI Compatible API
  • Note the port (usually 1234)

Default endpoint:

http://localhost:1234/v1

Place this endpoint and model name inside your configuration.
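LM Studio's server speaks the OpenAI chat format, so the request shape differs from Ollama's. A sketch, assuming the default endpoint above (the model id is a placeholder; use the exact name LM Studio lists):

```python
import json
import urllib.request

def build_chat_request(base_url: str, model: str, user_message: str):
    """Build an OpenAI-compatible /chat/completions request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# "qwen3-4b-instruct" is illustrative; copy the id shown in LM Studio.
req = build_chat_request("http://localhost:1234/v1", "qwen3-4b-instruct", "Hi")
```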

🌐 3. Web Search Setup (Options)

You can choose between a Cloud-based API (Serper) or a Local, privacy-focused engine (SearXNG).

Option A: Serper API (Cloud)

  1. Get an API key from Serper.dev.
  2. In the bot's Web Configuration:
    • Set SEARCH_PROVIDER to: serper
    • Paste your Serper API Key into the designated field.

Option B: SearXNG (Local / Privacy)

SearXNG is a free, self-hosted metasearch engine that aggregates results from multiple sources without tracking you.

  1. Install Docker (if not already installed). Before running SearXNG, you must have Docker Desktop or Docker Engine installed (Windows/Linux).

  2. Download and run SearXNG via the terminal. Open your terminal (Command Prompt, PowerShell, or Bash) and run the following commands:

    Step A: Pull the latest image

    docker pull searxng/searxng
    

    Step B: Run the container (note: ${PWD} represents your current directory).

  • If using PowerShell or Bash: use "${PWD}/searxng"
  • If using Windows Command Prompt (CMD): replace ${PWD} with %cd%
docker run -d \
  -p 8080:8080 \
  -v "${PWD}/searxng:/etc/searxng" \
  --name searxng_container \
  searxng/searxng
  3. Locate and edit settings.yml. The command in Step B creates a folder named 'searxng' inside the directory where you ran the command.
  • Go to the folder: [Your Project Path]/searxng/
  • Open 'settings.yml' with a text editor (Notepad, VS Code, etc.).

Note: If the file does not exist or is empty, create or edit it to include the settings below.

  4. Important Configuration (Enable JSON). For the AI to read search results, the "json" format must be enabled. Inside 'settings.yml', ensure the following lines exist (add them if they are missing):

use_default_settings: true
search:
  formats:
    - html
    - json

  5. Bot Configuration. In the bot's Web Configuration, set SEARCH_PROVIDER to: searxng and enter your SearXNG URL (for example http://localhost:8080).

⚠️ CRITICAL: The Docker SearXNG container MUST be running in the background for the AI to access the web. If the container is stopped, the bot will return an error when attempting to perform a web search.
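With the "json" format enabled, a search request returns a "results" array of objects with "title" and "url" fields. A minimal sketch against the default port from the docker command above (helper names are illustrative):

```python
import json
import urllib.parse

def build_search_url(base_url: str, query: str) -> str:
    """Build a SearXNG search URL that requests JSON output."""
    params = urllib.parse.urlencode({"q": query, "format": "json"})
    return f"{base_url}/search?{params}"

def extract_results(raw_json: str, limit: int = 5):
    """Pull (title, url) pairs out of a SearXNG JSON response."""
    data = json.loads(raw_json)
    return [(r["title"], r["url"]) for r in data.get("results", [])[:limit]]

url = build_search_url("http://localhost:8080", "local llm")
```

If the container is stopped, the HTTP request to this URL fails, which is exactly the error case described in the warning above.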

🎨 4. ComfyUI - Image Generation (Optional)

The bot supports local image generation through ComfyUI. You can use any workflow template inside ComfyUI, but it is strongly recommended to use the tested workflow included with this project.

Recommended Setup (Tested and Working)

Inside the project folder you will find a .json workflow file:

LocalAISlackBot/comfy resources/lumina-2-text2img-comfyui-wiki.com.json

Setup Steps

  1. Install ComfyUI.
  2. Start ComfyUI.
  3. Drag and drop the provided .json workflow file into the ComfyUI interface.
  4. ComfyUI will automatically show which models are missing.
  5. Download the required models directly inside ComfyUI.

⚙ Important Settings in ComfyUI

Inside ComfyUI:

  1. Open Settings
  2. Enable Dev Mode
  3. Check:
  • Host
  • Port

Default example:

http://localhost:8188

If running on another PC in your local network:

http://192.168.X.X:8188

You must place the exact host and port in the bot configuration page. Without Dev Mode enabled, API access will not work.
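Programmatic access goes through ComfyUI's HTTP API: a workflow in API format is POSTed to the /prompt endpoint (Dev Mode is what exposes the "Save (API Format)" export needed for this). A sketch of the request shape, assuming the default host and port above:

```python
import json
import urllib.request

def build_prompt_request(host: str, workflow: dict):
    """Wrap an API-format workflow for ComfyUI's POST /prompt endpoint."""
    payload = {"prompt": workflow}
    return urllib.request.Request(
        f"{host}/prompt",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# In practice the workflow dict is loaded from the project's .json file,
# exported in API format; an empty dict is used here only for illustration.
req = build_prompt_request("http://localhost:8188", {})
```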

🎵 5. Music Generation - Optional

Repository: https://github.com/RafiBG/AIMusicGenerator

Role:

  • Separate Python API
  • Receives prompt
  • Generates audio file
  • Returns .wav to Slack

Runs independently from the main bot.

🐍 6. Python Execution - Optional

Repository: https://github.com/RafiBG/AIPythonRun

Role: Sandboxed Python execution API

  • AI writes code
  • Code is executed safely
  • Returns stdout, stderr, and the execution result

🎮 Slash Commands In Slack Channels (Group Chat)

| Command | Description |
| --- | --- |
| /clear_memory | Clears conversation memory for the entire channel |

🎮 Commands In Direct Messages (Private Chat)

| Command | Description |
| --- | --- |
| !forget | Clears AI memory for your private conversation |
| !help | Shows usage instructions |

📸 Gallery & Examples

Web Interface (Adaptive OS Light/Dark Mode)

Chat & Vision Capabilities

File Analysis & Web Search

Image Generation & Music Generation

Python Code Execution

RAG/Vector Memory
