Instructions for AI coding assistants working on this codebase.
A Python framework for deploying AI agents as Slack bots. One Docker image per agent — each agent has its own Slack app identity, system prompt, LLM provider, and tool servers. No LangChain/LlamaIndex; the agent loop is a custom ~200-line async generator.
```bash
# Create and activate venv (one-time setup)
python3 -m venv .venv
source .venv/bin/activate

# Install for development
pip install -e ".[dev]"

# Run an agent locally
slack-agents run agents/<agent-dir>

# Check agent health (requires persistent storage)
slack-agents healthcheck agents/<agent-dir>

# Export conversations to HTML
slack-agents export-conversations agents/<agent-dir> --format=html

# Build Docker image for an agent
slack-agents build-docker agents/<agent-dir>

# Build and push to a registry
slack-agents build-docker agents/<agent-dir> --push registry.example.com

# Tests (asyncio_mode=auto, no flags needed)
pytest
pytest tests/test_format.py              # single file
pytest tests/test_format.py::test_name   # single test

# Lint and format
ruff check --fix src/ tests/
ruff format src/ tests/
```

All commands assume the `.venv` virtualenv is active.
Pre-commit hooks run ruff check+format automatically on commit.
Plugin system: All pluggable concerns (LLM, storage, tools) follow the same pattern: a `type` field with a dotted import path, and a `Provider` class in that module. `load_plugin(type_path, **kwargs)` loads any plugin.
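The pattern above can be sketched in a few lines. This is an illustrative reimplementation, not the framework's actual code:

```python
import importlib


def load_plugin(type_path: str, **kwargs):
    """Load a plugin module by dotted path and instantiate its Provider class.

    Sketch of the plugin pattern described above: every plugin module is
    assumed to expose a class named `Provider`.
    """
    module = importlib.import_module(type_path)
    return module.Provider(**kwargs)
```

Because every pluggable concern follows the same convention, one loader covers LLM providers, storage backends, and tool servers alike.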
Startup: `main.py` -> `load_agent_config()` returns `(config, system_prompt, agent_name)` -> `SlackAgent()` -> connects storage/tools/Slack Socket Mode.
Per-message flow: Slack event -> `agent.py._handle_message()` -> load conversation history via `ConversationManager` -> extract file attachments -> `run_agent_loop_streaming()` async generator -> `StreamingFormatter` routes text to `SlackStreamer` and tables to native `TableBlock` messages -> tool calls shown as Slack attachments -> usage footer posted -> response persisted.
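A minimal sketch of consuming the streaming side of this flow. `StreamEvent` here is a stand-in dataclass and `consume` is a hypothetical name; only the yield contract (text events vs. tool-status dicts) is from the source:

```python
import asyncio
from dataclasses import dataclass


@dataclass
class StreamEvent:  # minimal stand-in for the framework's StreamEvent
    text: str


async def consume(stream):
    """Collect output from an agent-loop-style async generator.

    Per the flow above: StreamEvent -> text for the streamer,
    plain dict -> tool status (rendered as a Slack attachment).
    """
    texts, tool_updates = [], []
    async for event in stream:
        if isinstance(event, dict):
            tool_updates.append(event)
        else:
            texts.append(event.text)
    return "".join(texts), tool_updates
```

The `isinstance(event, dict)` check mirrors how the formatter distinguishes tool-status updates from text deltas.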
Key modules:
- `agent_loop.py` -- Core LLM->tools->LLM loop (max 15 iterations, parallel tool execution via `asyncio.gather`), defines the `ToolProvider` protocol
- `llm/base.py` -- `BaseLLMProvider` ABC, `StreamEvent` dataclass, internal Anthropic-style message format
- `llm/anthropic.py`, `llm/openai.py` -- Provider implementations (the OpenAI provider converts at its boundary)
- `tools/base.py` -- `BaseToolProvider` and `BaseFileImporterProvider` ABCs with `allowed_functions` regex filtering
- `tools/mcp_http.py` -- MCP over HTTP/SSE tool provider
- `tools/file_exporter.py` -- Built-in document generation tool (PDF, DOCX, XLSX, CSV, PPTX)
- `tools/file_importer.py` -- Built-in file import provider (PDF, DOCX, XLSX, PPTX, text, images)
- `storage/base.py` -- `BaseStorageProvider` ABC (generic persistence layer)
- `storage/sqlite.py` -- SQLite storage provider (in-memory or file-based, via aiosqlite)
- `storage/postgres.py` -- PostgreSQL storage provider (asyncpg)
- `slack/agent.py` -- `SlackAgent` with Bolt AsyncApp, event routing, cost tracking
- `slack/conversations.py` -- `ConversationManager` wrapping storage with conversation logic
- `slack/streaming.py` + `streaming_formatter.py` -- Streaming output with table detection
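The parallel tool execution mentioned for `agent_loop.py` can be sketched as follows; the function and argument names are illustrative, only the `asyncio.gather` pattern is from the source:

```python
import asyncio


async def run_tools_parallel(tool_calls, call_tool):
    """Execute a batch of tool calls concurrently via asyncio.gather.

    Illustrative sketch: `tool_calls` is a list of {"name", "input"} dicts,
    `call_tool` is an async callable. return_exceptions=True means one
    failing tool yields its exception instead of sinking the whole batch.
    """
    results = await asyncio.gather(
        *(call_tool(c["name"], c["input"]) for c in tool_calls),
        return_exceptions=True,
    )
    return list(zip((c["name"] for c in tool_calls), results))
```

Gathering instead of awaiting sequentially matters when the LLM requests several independent tool calls in one turn.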
Internal message format is Anthropic-style throughout (content as a list of typed blocks: `text`, `tool_use`, `tool_result`). The OpenAI provider converts at its boundary via `_convert_messages()` and `_convert_tools()`.
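For illustration, a short tool-use exchange in this internal format might look like this (role/block shapes follow the Anthropic convention; the tool name and values are invented):

```python
# One user turn, one assistant tool call, and the tool result fed back.
messages = [
    {"role": "user", "content": [{"type": "text", "text": "What's 2+2?"}]},
    {
        "role": "assistant",
        "content": [
            # tool_use block: the model requests a tool call by id
            {"type": "tool_use", "id": "t1", "name": "calc", "input": {"expr": "2+2"}},
        ],
    },
    {
        "role": "user",
        "content": [
            # tool_result block: matched back to the request via tool_use_id
            {"type": "tool_result", "tool_use_id": "t1", "content": "4"},
        ],
    },
]
```

Keeping this shape everywhere means only the OpenAI provider needs conversion code, and only at its own boundary.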
Each agent lives in `agents/<name>/` with `config.yaml` and `system_prompt.txt`. The agent name is derived from the directory name. Config supports `{ENV_VAR}` interpolation (uppercase + underscore patterns only).
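A sketch of what `{ENV_VAR}` interpolation could look like under the stated "uppercase + underscore" rule; the regex and function name are assumptions, not the framework's implementation:

```python
import os
import re

# Matches {FOO}, {MY_TOKEN}, etc. -- uppercase letters, digits, underscores only.
_ENV_PATTERN = re.compile(r"\{([A-Z][A-Z0-9_]*)\}")


def interpolate_env(value: str) -> str:
    """Replace {ENV_VAR} placeholders with environment values.

    Placeholders that don't match the uppercase pattern (or have no
    environment value set) are left untouched in this sketch.
    """
    return _ENV_PATTERN.sub(
        lambda m: os.environ.get(m.group(1), m.group(0)), value
    )
```

Restricting the pattern to uppercase names avoids accidentally interpolating ordinary `{braced}` text in prompts or config values.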
Top-level config fields:
- `version` (required) -- user-controlled string shown in the usage footer and used as the Docker image tag when building with `slack-agents build-docker`. Track changes to the agent's prompts, tools, or behavior. The framework does not interpret this -- it can be semver, a date, or any string.
- `schema` (required) -- config format identifier, currently `"slack-agents/v1"`. The framework checks this to ensure it can parse the config. Newer schemas fail with a clear upgrade message.
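A minimal `config.yaml` sketch: only `version` and `schema` are documented above, so the `llm:` block here is hypothetical, following the plugin pattern (dotted `type` path) and `{ENV_VAR}` interpolation:

```yaml
# Minimal sketch of agents/<name>/config.yaml -- fields beyond
# `version` and `schema` are illustrative, not a documented contract.
schema: slack-agents/v1
version: "0.3.0"                        # free-form: semver, a date, any string
llm:
  type: slack_agents.llm.anthropic      # dotted import path per the plugin pattern
  api_key: "{ANTHROPIC_API_KEY}"        # {ENV_VAR} interpolation
```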
- Async everywhere -- all I/O (Slack, LLM, tools, storage) is async
- Streaming as async generator -- `run_agent_loop_streaming()` yields `StreamEvent` (text) and `dict` (tool status)
- Unified tool interface -- `BaseToolProvider` ABC (`.tools` + `.call_tool()`) for LLM-facing tools; `BaseFileImporterProvider` ABC (`.handlers`) for file import -- both configured in the `tools:` section, separated by `isinstance` in `_init_tools()`
- Explicit configuration over silent defaults -- do not auto-load providers when none are configured; if no `BaseFileImporterProvider` is in the config, file attachments are rejected with a clear error
- Generic storage -- `BaseStorageProvider` knows nothing about conversations; `ConversationManager` adds conversation logic on top
- Lazy initialization -- `SlackStreamer` creates the stream on first delta; tools connect only when initialized
- Caching-aware cost tracking -- each LLM provider has `estimate_cost()` with provider-specific cache multipliers
- Tool definitions in Anthropic format -- `{"name", "description", "input_schema"}` is the canonical format everywhere
- 1 replica per agent -- Socket Mode requires exactly one WebSocket connection per app
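An example tool definition in that canonical Anthropic format; the tool name and schema are invented for illustration:

```python
# A tool definition as the framework's canonical {"name", "description",
# "input_schema"} dict. input_schema is a JSON Schema object.
tool = {
    "name": "get_weather",  # hypothetical tool, for illustration only
    "description": "Get current weather for a city",
    "input_schema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}
```

Standardizing on this shape means tool lists can flow straight to the Anthropic provider, while the OpenAI provider reshapes them in `_convert_tools()`.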
The project includes AI-agent-friendly documentation following the llms.txt convention:
- `llms.txt` (repo root) -- concise index pointing to docs and llms-full.txt
- `llms-full.txt` (repo root) -- generated from docs via `python3 src/slack_agents/scripts/generate_llms_full.py`; bundled in the PyPI wheel via `force-include` in pyproject.toml

When modifying docs: re-run `python3 src/slack_agents/scripts/generate_llms_full.py` and commit the result.
CHANGELOG.md is the gate. Entries accumulate under ## [Unreleased] as PRs land; releasing just renames that heading to the new version. If prior releases skipped this step (as 0.6.3 did), backfill those versions first so the changelog is honest about history.
- CHANGELOG.md -- rename `## [Unreleased]` to `## [X.Y.Z] - YYYY-MM-DD` with today's date, and add a fresh empty `## [Unreleased]` above it.
- pyproject.toml -- bump `version` to `X.Y.Z`.
- llms-full.txt -- regenerate: `python3 src/slack_agents/scripts/generate_llms_full.py`.
- Commit and push to `main`.
- Create a GitHub Release (which creates a git tag).
- The `publish.yml` workflow automatically builds and publishes to PyPI via trusted publishing.
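The CHANGELOG.md rename step looks like this (version and date hypothetical):

```markdown
## [Unreleased]

## [0.7.0] - 2025-01-15
### Fixed
- ...
```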
The PyPI deployment requires manual approval in the GitHub Actions UI. Do NOT publish to PyPI manually — the GitHub Release trigger handles it.
See CONTRIBUTING.md for full coding and commit conventions.
- Python 3.12+, line length 100
- Ruff rules: E, F, I (errors, pyflakes, isort)
- Keep it simple. Minimal abstractions, no unnecessary indirection.
- Commit messages: Conventional Commits -- `feat:`, `fix:`, `docs:`, `chore:`, `test:`, `refactor:`. Lowercase, imperative, under 72 chars.
- NEVER run `git commit` or `git push` without explicit user approval. Always propose the commit message and file list, then STOP and wait for the user to say "go", "commit", "yes", or similar. This is non-negotiable -- even if the user says "prepare a release" or "let's commit", you must present the plan and wait. "Prepare" ≠ "execute".