LocalAISlackBot is a local-AI-powered Slack bot built with Python. It integrates local LLMs via Ollama, LM Studio, or cloud providers, plus web search (Serper or SearXNG), vision, file analysis, image generation, music generation, and Python execution directly inside your Slack workspace.
What can this local AI do in Slack?
- 💬 Chat: Natural conversation with context awareness.
- 🧠 On-Demand RAG Memory: A specialized tool that allows users to explicitly save information to a channel-specific vector database. The bot only accesses this memory for the specific channel where it was saved, ensuring privacy and context relevance.
- 📄 File Analysis: Upload PDF, TXT, or DOCX files, and the bot will read and answer questions based on them.
- 👁️ Vision: Upload images and ask the bot to describe or analyze them (requires a Vision model).
- 🕒 Local Time and Date: Retrieve the current system time and date.
- 🌐 Web Search: The bot can search the internet using either Serper API (Cloud) or SearXNG (Local/Self-hosted).
- 🎨 Image Generation: The bot can generate images locally using ComfyUI (requires installed ComfyUI).
- 🎵 Music Generation: Create original audio and music (requires the separate music-generation program to be running).
- 🐍 Python Execution: Write and execute Python code snippets (requires the separate Python execution service to be running).
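To illustrate the channel-scoped memory idea above, here is a minimal Python sketch. It swaps the real vector database for simple word-overlap scoring, and the names (`save_memory`, `recall`) are hypothetical rather than the bot's actual API; the point is that a query in one channel never sees another channel's notes:

```python
from collections import defaultdict

# One independent store per Slack channel (illustrative; the real bot
# uses a channel-specific vector database instead of plain lists).
_memory: dict[str, list[str]] = defaultdict(list)

def save_memory(channel_id: str, text: str) -> None:
    """Explicitly save a note into this channel's store only."""
    _memory[channel_id].append(text)

def recall(channel_id: str, query: str, top_k: int = 3) -> list[str]:
    """Return this channel's best-matching notes (word overlap, not embeddings)."""
    words = set(query.lower().split())
    scored = sorted(
        _memory[channel_id],
        key=lambda note: len(words & set(note.lower().split())),
        reverse=True,
    )
    return scored[:top_k]
```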
Built with:
- Python 3.11.9
- Slack Bolt
- Ollama
- LangChain
- FastAPI for optional services
- ComfyUI for image generation
1 Create the Slack App
1. Go to the Slack API Dashboard.
2. Click Create New App.
3. Choose From Scratch.
4. Enter your app name (example: AI-Bot).
5. Select your workspace.
2 Configure Bot Token Scopes
Go to:
OAuth & Permissions → Bot Token Scopes
Add the following scopes:
| Scope | Description |
|---|---|
| app_mentions:read | View messages that mention @AI-Bot |
| assistant:write | Allow AI-Bot to act as an App Agent |
| channels:history | View messages in public channels |
| chat:write | Send messages as AI-Bot |
| commands | Enable slash commands |
| files:read | View files shared in conversations |
| files:write | Upload and manage files |
| groups:history | View messages in private channels |
| im:history | View direct messages |
| users:read | View users in the workspace |
After adding scopes:
- Click Install to Workspace
- Authorize the app
Copy your:
- Bot User OAuth Token (xoxb-...)
You will need this in your configuration panel.
3 Enable Slash Command
Go to: Slash Commands → Create New Command
Configure:
Name:
/clear_memory
Description:
Clears the conversation memory for the user who runs this command.
Save the command.
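The command's effect can be pictured with a small sketch, assuming the bot keeps per-conversation history in an in-memory dict (hypothetical names; the real implementation may store this differently):

```python
# Hypothetical per-conversation history store keyed by a conversation ID
# (a channel ID or DM ID), illustrating what /clear_memory does.
conversation_memory: dict[str, list[dict]] = {}

def remember(conversation_id: str, role: str, text: str) -> None:
    """Append one message to a conversation's history."""
    conversation_memory.setdefault(conversation_id, []).append(
        {"role": role, "content": text}
    )

def clear_memory(conversation_id: str) -> int:
    """Drop all stored history for one conversation; return messages removed."""
    return len(conversation_memory.pop(conversation_id, []))
```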
4 Enable Socket Mode
Go to: Settings → Socket Mode
Turn on:
Enable Socket Mode
Then:
- Click Generate App-Level Token
- Add scope:
connections:write
Copy your:
- App-Level Token (xapp-...)
This token is required for the WebSocket connection.
5️ Event Subscriptions
Go to: Event Subscriptions
Enable Events, then subscribe to the following bot events:
app_mention
message.channels
message.groups
message.im
Save changes.
6️ Configure Tokens in the Web Interface
After generating both tokens, enter them in the bot's web configuration interface:
- Bot Token → xoxb-...
- App Token → xapp-...
Without both tokens, the bot cannot connect.
7️ Install Dependencies
Inside your project directory:
pip install -r requirements.txt
8️ Run the Bot
python main.py
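Before starting the bot, it can help to sanity-check the two tokens. A minimal sketch (the helper name is hypothetical, not part of the bot; it only checks the well-known Slack token prefixes):

```python
def validate_slack_tokens(bot_token: str, app_token: str) -> list[str]:
    """Return a list of problems; an empty list means the tokens look plausible."""
    problems = []
    # Slack Bot User OAuth Tokens always start with "xoxb-".
    if not bot_token.startswith("xoxb-"):
        problems.append("Bot User OAuth Token must start with 'xoxb-'")
    # Slack App-Level Tokens (for Socket Mode) always start with "xapp-".
    if not app_token.startswith("xapp-"):
        problems.append("App-Level Token must start with 'xapp-'")
    return problems
```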
Your bot can connect to:
- Ollama
- LM Studio
- Any OpenAI-compatible local server on your network
You can even run the model on another PC on the same local network.
- Download and install Ollama.
- Start the server in the terminal:
ollama serve
- Default API endpoint:
http://localhost:11434
If running on another machine in your local network:
http://192.168.X.X:11434
Replace 192.168.X.X with the IP of the PC running Ollama.
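As a sketch of how a client talks to Ollama's REST API, the following builds (but does not send) a chat request against the `/api/chat` endpoint; the helper name is illustrative and not the bot's actual code:

```python
import json
from urllib import request

def build_ollama_request(host: str, model: str, prompt: str) -> request.Request:
    """Build a non-streaming chat request for Ollama's /api/chat endpoint."""
    payload = {
        "model": model,  # exact name from `ollama list`, e.g. "qwen3:4b"
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # return one complete JSON response
    }
    return request.Request(
        f"{host}/api/chat",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

Sending it with `urllib.request.urlopen(...)` would return a JSON body whose `message.content` field holds the model's reply.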
Download a Model
Recommended lightweight model (pull it in the terminal):
ollama pull qwen3:4b
You can also use any other supported model. List installed models:
ollama list
Copy the exact model name into your bot configuration.
- Download LM Studio.
- Download a model inside LM Studio, for example:
- Qwen 3 4B Instruct
- Start the Local Server inside LM Studio:
- Go to Developer tab
- Enable OpenAI Compatible API
- Note the port (usually 1234)
Default endpoint:
http://localhost:1234/v1
Place this endpoint and model name inside your configuration.
You can choose between a Cloud-based API (Serper) or a Local, privacy-focused engine (SearXNG).
Option A: Serper API (Cloud)
- Get an API key from Serper.dev.
- In the bot's Web Configuration:
- Set SEARCH_PROVIDER to: serper
- Paste your Serper API Key into the designated field.
Option B: SearXNG (Local / Privacy)
SearXNG is a free, self-hosted metasearch engine that aggregates results from multiple sources without tracking you.
- Install Docker (if not already installed). Before running SearXNG, you must have Docker Desktop or Docker Engine installed (Windows/Linux).
- Download and run SearXNG via the terminal. Open your terminal (Command Prompt, PowerShell, or Bash) and run the following commands:
Step A: Pull the latest image
docker pull searxng/searxng
Step B: Run the container
Note: ${PWD} represents your current directory.
- If using PowerShell or Bash: use "${PWD}/searxng"
- If using Windows Command Prompt (CMD): replace ${PWD} with %cd%
docker run -d \
-p 8080:8080 \
-v "${PWD}/searxng:/etc/searxng" \
--name searxng_container \
searxng/searxng
- Locate and edit settings.yml. The command in Step B creates a folder named 'searxng' inside the directory where you ran it.
- Go to the folder: [Your Project Path]/searxng/
- Open 'settings.yml' with a text editor (Notepad, VS Code, etc.).
Note: If the file does not exist or is empty, create/edit it to include the settings below.
- Important Configuration (Enable JSON) For the AI to read search results, the "json" format must be enabled. Inside 'settings.yml', ensure the following lines exist (add them if they are missing):
use_default_settings: true
search:
formats:
- html
- json
- Bot configuration. In the bot's Web Configuration:
- Set SEARCH_PROVIDER to: searxng
- Set SEARXNG_HOST to: http://localhost:8080 (or your server's local IP)
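With `json` enabled in settings.yml, search results can be fetched from SearXNG's `/search` endpoint by passing `format=json`. A minimal URL-building sketch (hypothetical helper name):

```python
from urllib.parse import urlencode

def build_searxng_url(host: str, query: str) -> str:
    """Build a SearXNG search URL that returns JSON results.
    Requires 'json' to be listed under search.formats in settings.yml."""
    return f"{host}/search?" + urlencode({"q": query, "format": "json"})
```

Fetching that URL returns a JSON document whose `results` array holds titles, URLs, and snippets the bot can feed to the LLM.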
The bot supports local image generation through ComfyUI. You can download and use any template inside ComfyUI, but it is strongly recommended to use the tested workflow included with this project.
Recommended Setup (Tested and Working)
Inside the project folder:
LocalAISlackBot/comfy resources/lumina-2-text2img-comfyui-wiki.com.json
You will find a .json workflow file.
Setup Steps
- Install ComfyUI.
- Start ComfyUI.
- Drag and drop the provided .json workflow file into the ComfyUI interface.
- ComfyUI will automatically show which models are missing.
- Download the required models directly inside ComfyUI.
⚙ Important Settings in ComfyUI
Inside ComfyUI:
- Open Settings
- Enable Dev Mode
- Check:
- Host
- Port
Default example:
http://localhost:8188
If running on another PC in your local network:
http://192.168.X.X:8188
You must place the exact host and port in the bot configuration page. Without Dev Mode enabled, API access will not work.
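With Dev Mode enabled, ComfyUI exposes an HTTP API, and a workflow can be queued by POSTing it to the `/prompt` endpoint. The sketch below builds such a request (helper name is illustrative; the workflow dict would come from the exported .json file):

```python
import json
from urllib import request

def build_comfyui_prompt_request(host: str, workflow: dict) -> request.Request:
    """Build a request that queues a workflow via ComfyUI's /prompt endpoint.
    `workflow` is the API-format graph exported from ComfyUI (Dev Mode)."""
    body = json.dumps({"prompt": workflow}).encode()
    return request.Request(
        f"{host}/prompt",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```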
Repository: https://github.com/RafiBG/AIMusicGenerator
Role:
- Separate Python API
- Receives prompt
- Generates audio file
- Returns .wav to Slack
Runs independently from the main bot.
Repository: https://github.com/RafiBG/AIPythonRun
Role: Sandboxed Python execution API
- AI writes code
- Code is executed safely
- Returns:
  - stdout
  - stderr
  - execution result
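The stdout/stderr capture can be sketched with the standard library. Note this naive version only runs the code in a fresh interpreter with a timeout; the real AIPythonRun service adds proper sandboxing on top:

```python
import subprocess
import sys

def run_snippet(code: str, timeout: float = 5.0) -> dict:
    """Execute a code string in a fresh Python process and capture its output.
    Illustrative only: a timeout is NOT a sandbox."""
    proc = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    return {
        "stdout": proc.stdout,
        "stderr": proc.stderr,
        "returncode": proc.returncode,
    }
```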
🎮 Slash Commands In Slack Channels (Group Chat)
| Command | Description |
|---|---|
| /clear_memory | Clears conversation memory for the entire channel |
🎮 Commands In Direct Messages (Private Chat)
| Command | Description |
|---|---|
| !forget | Clears AI memory for your private conversation |
| !help | Shows usage instructions |
Web Interface (Adaptive OS Light/Dark Mode)
Chat & Vision Capabilities
File Analysis & Web Search
Image Generation & Music Generation
Python Code Execution
RAG/Vector Memory