Agentron is a local-first, open-source platform for AI agent orchestration and workflow automation. With heap mode (Level 4 — recursive/self-building), the planner assembles a DAG of agents per request and can create new tools and agents on the fly; each turn uses a model-chosen specialist graph. Without heap, chat uses Level 1 (ReAct + tools). Workflows use Level 2 (static multi-agent). You run everything on your own infrastructure: no cloud lock-in, full data privacy, optional desktop app.
Agentron lets you design, run, and manage AI agents and multi-agent workflows locally. Its differentiator is heap mode (Level 4): per request, the planner chooses which specialists run (e.g. workflow, agent), in what order or parallelism, and can create new tools and agents on the fly; the runtime builds and runs that DAG. When heap is off, chat uses Level 1 (ReAct + tools); workflows use Level 2 (static multi-agent). Ideal for teams that want cutting-edge agent orchestration with production safety (cost control, loop limits, provider-agnostic LLMs) and privacy-first automation without cloud-only platforms.
- Heap (Level 4 — recursive/self-building). Planner selects specialists and order per request; runtime builds and runs the DAG; can create new tools and agents on the fly. Fallback: Level 1 (one ReAct-style agent) when heap is off; max steps and loop limits throughout.
- Local-first and self-hosted. SQLite storage, optional Electron desktop app; run on-premise or air-gapped.
- Visual agent builder. Node-based graphs (LLM, tools, decision nodes) and code agents (JavaScript, Python, TypeScript) in sandboxes.
- Multi-agent workflows (Level 2). Human-designed graphs and configurable rounds; the chat assistant creates and edits agents, workflows, and tools via natural language.
- Tools and integrations. Native, HTTP, and MCP tools; RAG and knowledge connectors (Notion, Google Drive, Dropbox, OneDrive, Confluence, GitBook, local folders); Podman sandboxes; OpenAI, Anthropic, Ollama, and remote LLM support; OpenClaw gateway integration.
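To illustrate the tool categories above, here is a hypothetical descriptor for an HTTP tool. The field names are illustrative only, not Agentron's actual tool schema (see the docs for that):

```typescript
// Hypothetical shape of an HTTP tool descriptor (illustrative only; not
// Agentron's real schema).
interface HttpTool {
  name: string;
  description: string; // shown to the model when it chooses tools
  method: "GET" | "POST";
  url: string;
  headers?: Record<string, string>;
}

// Render a one-line summary the planner could include in its tool list.
function toPrompt(t: HttpTool): string {
  return `${t.name}: ${t.description} (${t.method} ${t.url})`;
}
```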
ReAct (Reasoning + Acting) is the pattern used by most production assistants: one LLM, one context, a fixed set of tools; the model loops: think, choose a tool, act, observe. Used by ChatGPT (function calling), Claude (tools), and the OpenAI Assistants API: single orchestrator, fixed tool set, dynamic tool selection only.
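The ReAct loop can be sketched as follows. The `Llm` and tool interfaces here are illustrative stand-ins, not Agentron's actual API; the max-step cutoff mirrors the loop limits the platform enforces:

```typescript
// Minimal ReAct-style loop: think, choose a tool, act, observe, repeat.
type ToolCall = { tool: string; args: Record<string, unknown> };
type Decision = { done: boolean; answer?: string; call?: ToolCall };

interface Llm {
  decide(history: string[]): Decision; // one "thought" step
}
type Tools = Record<string, (args: Record<string, unknown>) => string>;

function reactLoop(llm: Llm, tools: Tools, task: string, maxSteps = 8): string {
  const history = [`task: ${task}`];
  for (let step = 0; step < maxSteps; step++) {
    const d = llm.decide(history);        // thought + tool choice
    if (d.done) return d.answer ?? "";    // model finished
    if (!d.call || !(d.call.tool in tools)) break;
    const observation = tools[d.call.tool](d.call.args); // act
    history.push(`observed: ${observation}`);            // observe
  }
  return "max steps reached";             // enforced loop limit
}
```

Note the single orchestrator and fixed tool set: the model only picks *which* tool to call, never restructures the loop itself, which is what separates Level 1 from heap mode.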
Why Agentron is cutting edge: With heap on, Agentron runs at Level 4 (recursive/self-building): the planner assembles a DAG of specialists per request and can create new tools and agents on the fly. Specialists are real roles in the app (e.g. workflow, agent); the model picks which run and in what order each turn. Without heap, chat uses Level 1 (ReAct + tools). Workflows use Level 2 (static topology, human-designed graphs). Max steps, loop detection, and tool-call budgets are enforced. For the full taxonomy (Levels 1–4), see Agent architectures (comparison) in the docs.
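As a rough sketch of the heap-mode idea (illustrative types only, not the runtime's internals): the planner emits a small DAG of specialist nodes, and the runtime executes each node once all of its dependencies have completed.

```typescript
// Illustrative heap-mode sketch: the planner returns a DAG of specialist
// nodes; the runtime runs each node when all of its deps are done.
type DagNode = { id: string; specialist: "workflow" | "agent"; deps: string[] };

function runDag(nodes: DagNode[], run: (n: DagNode) => string): Map<string, string> {
  const results = new Map<string, string>();
  let progressed = true;
  while (results.size < nodes.length && progressed) {
    progressed = false;
    for (const n of nodes) {
      if (results.has(n.id)) continue;
      if (n.deps.every((d) => results.has(d))) {
        results.set(n.id, run(n)); // all deps satisfied: execute
        progressed = true;
      }
    }
  }
  if (results.size < nodes.length) throw new Error("cycle or missing dep");
  return results;
}
```

Nodes with no dependency between them could also run in parallel; the sequential loop above keeps the sketch minimal.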
Agentron uses event-driven patterns under the hood for execution and delivery: workflow runs are driven by a DB-backed event queue (RunStarted, NodeRequested, NodeCompleted, UserResponded) with persisted run state for pause/resume and user-in-the-loop; chat turns can be consumed via a pub/sub event channel (SSE) so clients subscribe by turnId and receive the same stream as streaming POST; and workflow execution is queued (DB-backed job queue with bounded concurrency) so start/resume/scheduled runs are serialized and observable. For details, see Event-driven architecture in the docs.
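A minimal sketch of the DB-backed event-queue pattern described above: the event names match the docs, but the in-memory queue and the run-state handling are illustrative (the real runtime persists both in SQLite).

```typescript
// Sketch of a workflow event queue; an array stands in for a DB table.
type RunEvent =
  | { kind: "RunStarted"; runId: string }
  | { kind: "NodeRequested"; runId: string; nodeId: string }
  | { kind: "NodeCompleted"; runId: string; nodeId: string; output: string }
  | { kind: "UserResponded"; runId: string; text: string };

class EventQueue {
  private events: RunEvent[] = [];
  enqueue(e: RunEvent) { this.events.push(e); }
  dequeue(): RunEvent | undefined { return this.events.shift(); }
}

// Drain the queue, updating persisted run state per event so a run can
// pause (waiting for UserResponded) and later resume from the same state.
function drain(q: EventQueue, state: Map<string, string>) {
  for (let e = q.dequeue(); e; e = q.dequeue()) {
    switch (e.kind) {
      case "RunStarted":    state.set(e.runId, "running"); break;
      case "NodeRequested": state.set(e.runId, `running:${e.nodeId}`); break;
      case "NodeCompleted": state.set(e.runId, `done:${e.nodeId}`); break;
      case "UserResponded": state.set(e.runId, "resumed"); break;
    }
  }
}
```

Because state is keyed by run and updated only through events, a crashed or paused run can be resumed by replaying from the persisted state rather than from scratch.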
- Node.js version in .nvmrc (e.g. 22.x)
- npm or pnpm
- Clone the repo and enter the project: `git clone <repo-url> && cd agentron`
- Install dependencies: `npm run install:ui` or `pnpm install` (UI only), or `npm install` (full, including desktop)
- Run the app: `npm run dev:ui` or `pnpm run dev:ui`, then open http://localhost:3000
Desktop app: Install Agentron as a standalone Electron app (no Node.js required). Download installers from the Download page or GitHub Releases. The app starts the UI and stores data locally.
Full steps, troubleshooting, and desktop build: INSTALL.md.
After starting the app, open Chat and try: "What tools do I have?" or "Create a simple agent that says hello." The assistant uses tools to create and edit agents, workflows, and tools. See Quick start in the docs for more prompts.
| Path | Description |
|---|---|
| `packages/ui` | Next.js UI |
| `packages/runtime` | Local runtime (agent execution) |
| `packages/core` | Shared types and utilities |
| `apps/desktop` | Electron wrapper |
| `installers` | Local LLM installer scripts |
To avoid pulling the Electron toolchain during UI work: `npm run install:ui`, then `npm run dev:ui`.
When you need desktop packaging: `npm run install:desktop`.
Use the same checks as CI before pushing:
- Node: Use the version in `.nvmrc` (e.g. `nvm use` or `fnm use`).
- pnpm: The repo pins `packageManager` in `package.json`; with Corepack enabled (`corepack enable`) you get the same pnpm version as CI.
- Install: Run `pnpm install --frozen-lockfile` (or at least `pnpm install`) so dependencies match the lockfile.
- Run CI checks: `pnpm run ci:local` runs format:check, typecheck, lint, test:coverage, file-lengths, and build:docs, plus build:ui and desktop dist.
From repo root: `npm run test:e2e-llm`
The script starts Ollama if needed and pulls the default E2E model if missing. Prerequisites: Ollama installed. Optional: Podman for run-code and container scenarios. Default model: Qwen 3 8B (`qwen3:8b`). Override with `E2E_LLM_MODEL` (e.g. `E2E_LLM_MODEL=qwen2.5:3b npm run test:e2e-llm`). These tests are not run in CI.
| Model | Env | Notes |
|---|---|---|
| Qwen 3 8B (default) | `E2E_LLM_MODEL=qwen3:8b` | Better for heap e2e |
| Qwen 3 14B | `E2E_LLM_MODEL=qwen3:14b` | Larger, higher quality |
| Qwen 2.5 3B | `E2E_LLM_MODEL=qwen2.5:3b` | Faster, smaller |
| Llama 3.2 | `E2E_LLM_MODEL=llama3.2` | Meta model; script auto-pulls if missing |
| Phi-3 | `E2E_LLM_MODEL=phi3` | Microsoft small, fast; script auto-pulls if missing |
Optional env: `OLLAMA_BASE_URL` (default http://localhost:11434), `E2E_SAVE_ARTIFACTS=1`, `E2E_LOG_DIR`.
Optional dependencies: To build the UI or run tests with coverage, optional deps must be installed. Set `optional=true` in `.npmrc` or run `npm install --include=optional`. CI uses `npm install --include=optional`.
- Documentation site (concepts, quick start, tutorials, capabilities)
- Agent architectures (comparison) (Level 1–4 taxonomy; where Agentron fits)
- INSTALL.md (install, troubleshoot, desktop build)
- Download (desktop installers)
See LICENSE.