
cap-jmk-real/agentron


Agentron: Cutting-Edge Local AI Agent Orchestration & Automation


Agentron is a local-first, open-source platform for AI agent orchestration and workflow automation. With heap mode (Level 4 — recursive/self-building), the planner assembles a DAG of agents per request and can create new tools and agents on the fly, so each turn runs a model-chosen specialist graph. Without heap, chat uses Level 1 (ReAct + tools) and workflows use Level 2 (static multi-agent). You run everything on your own infrastructure: no cloud lock-in, full data privacy, optional desktop app.


About

Agentron lets you design, run, and manage AI agents and multi-agent workflows locally. Its differentiator is heap mode (Level 4): per request, the planner chooses which specialists (e.g. workflow, agent) run, in what order or parallelism, and can create new tools and agents on the fly; the runtime builds and runs the resulting DAG. When heap is off, chat uses Level 1 (ReAct + tools) and workflows use Level 2 (static multi-agent). Ideal for teams that want cutting-edge agent orchestration with production safety (cost control, loop limits, provider-agnostic LLMs) and privacy-first automation without cloud-only platforms.

Features

  • Heap (Level 4 — recursive/self-building). Planner selects specialists and order per request; runtime builds and runs the DAG; can create new tools and agents on the fly. Fallback: Level 1 (one ReAct-style agent) when heap is off; max steps and loop limits throughout.
  • Local-first and self-hosted. SQLite storage, optional Electron desktop app; run on-premise or air-gapped.
  • Visual agent builder. Node-based graphs (LLM, tools, decision nodes) and code agents (JavaScript, Python, TypeScript) in sandboxes.
  • Multi-agent workflows (Level 2). Human-designed graphs and configurable rounds; the chat assistant creates and edits agents, workflows, and tools via natural language.
  • Tools and integrations. Native, HTTP, and MCP tools; RAG and knowledge connectors (Notion, Google Drive, Dropbox, OneDrive, Confluence, GitBook, local folders); Podman sandboxes; OpenAI, Anthropic, Ollama, and remote LLM support; OpenClaw gateway integration.
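As an illustration of the HTTP-tool idea above, here is a minimal sketch of wrapping a GET endpoint as a callable tool. The interface and field names are hypothetical, for illustration only — they are not Agentron's actual tool schema:

```typescript
// Hypothetical shape of an HTTP tool: a named, described endpoint the
// agent can call with string parameters. Not Agentron's real schema.
interface HttpTool {
  name: string;
  description: string;
  url: string;
  method: "GET" | "POST";
  call: (params: Record<string, string>) => Promise<string>;
}

function makeHttpTool(name: string, description: string, url: string): HttpTool {
  return {
    name,
    description,
    url,
    method: "GET",
    call: async (params) => {
      // Encode params as a query string and fetch the endpoint.
      const query = new URLSearchParams(params).toString();
      const res = await fetch(`${url}?${query}`);
      return res.text();
    },
  };
}

const weather = makeHttpTool("weather", "Look up current weather", "https://example.com/api");
```

A real registration would also carry a parameter schema so the LLM knows what arguments the tool accepts.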

Agent architecture and orchestration

ReAct (Reasoning + Acting) is the pattern used by most production assistants: one LLM, one context, a fixed set of tools; the model loops through think, choose a tool, act, observe. It is the pattern behind ChatGPT (function calling), Claude (tool use), and the OpenAI Assistants API: a single orchestrator with a fixed tool set and dynamic tool selection only.
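The loop can be sketched in a few lines; the "planner" here is a hard-coded stub standing in for the LLM, and the single tool is a toy calculator:

```typescript
// Minimal ReAct-style loop sketch: think -> choose tool -> act -> observe,
// with a max-steps guard. The think() stub stands in for an LLM call.
type Tool = { name: string; run: (input: string) => string };

const tools: Record<string, Tool> = {
  // Toy tool: sums "+"-separated numbers, e.g. "2+3" -> "5".
  calculator: {
    name: "calculator",
    run: (input) => String(input.split("+").map(Number).reduce((a, b) => a + b, 0)),
  },
};

// Stub planner: first call a tool, then answer with the last observation.
function think(question: string, observations: string[]): { tool?: string; input?: string; answer?: string } {
  if (observations.length === 0) return { tool: "calculator", input: question };
  return { answer: observations[observations.length - 1] };
}

function reactLoop(question: string, maxSteps = 5): string {
  const observations: string[] = [];
  for (let step = 0; step < maxSteps; step++) {
    const thought = think(question, observations); // think
    if (thought.answer !== undefined) return thought.answer;
    const tool = tools[thought.tool!];             // choose tool
    observations.push(tool.run(thought.input!));   // act + observe
  }
  return "max steps reached";
}
```

The fixed `tools` record is exactly what makes this Level 1: the set of tools never changes during the run.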

Why Agentron is cutting edge: With heap on, Agentron runs at Level 4 (recursive/self-building): the planner assembles a DAG of specialists per request and can create new tools and agents on the fly. Specialists are real roles in the app (e.g. workflow, agent); the model picks which run and in what order each turn. Without heap, chat uses Level 1 (ReAct + tools). Workflows use Level 2 (static topology, human-designed graphs). Max steps, loop detection, and tool-call budgets are enforced. For the full taxonomy (Levels 1–4), see Agent architectures (comparison) in the docs.
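The per-request graph idea can be sketched as a small topological executor. The specialist names below ("research", "outline", "write") are illustrative stand-ins, not Agentron's real roles:

```typescript
// Sketch of running a planner-chosen DAG of specialists in dependency order.
// Nodes whose dependencies are all satisfied could run in parallel.
type DagNode = { id: string; deps: string[]; run: (inputs: string[]) => string };

function runDag(nodes: DagNode[]): Map<string, string> {
  const results = new Map<string, string>();
  const remaining = [...nodes];
  while (remaining.length > 0) {
    // Every node whose dependencies are done is ready (a parallel wave).
    const ready = remaining.filter((n) => n.deps.every((d) => results.has(d)));
    if (ready.length === 0) throw new Error("cycle detected"); // loop guard
    for (const node of ready) {
      results.set(node.id, node.run(node.deps.map((d) => results.get(d)!)));
      remaining.splice(remaining.indexOf(node), 1);
    }
  }
  return results;
}

// A graph the planner might assemble for one request:
// two independent specialists feeding a third.
const dag: DagNode[] = [
  { id: "research", deps: [], run: () => "facts" },
  { id: "outline", deps: [], run: () => "structure" },
  { id: "write", deps: ["research", "outline"], run: (inputs) => inputs.join(" + ") },
];
```

The Level 4 difference is that the `dag` array is produced by the model per request (and may reference freshly created agents) rather than being authored by a human.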

Event-driven architecture

Agentron uses event-driven patterns under the hood for execution and delivery: workflow runs are driven by a DB-backed event queue (RunStarted, NodeRequested, NodeCompleted, UserResponded) with persisted run state for pause/resume and user-in-the-loop; chat turns can be consumed via a pub/sub event channel (SSE) so clients subscribe by turnId and receive the same stream as streaming POST; and workflow execution is queued (DB-backed job queue with bounded concurrency) so start/resume/scheduled runs are serialized and observable. For details, see Event-driven architecture in the docs.
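The run loop above can be sketched with an in-memory array standing in for the DB-backed queue and a Map standing in for persisted run state; the event names mirror those listed in the text, but the handler logic is a simplification:

```typescript
// Event-driven run loop sketch: events are appended to a queue and
// consumed serially (the bounded-concurrency analogue of the job queue).
type RunEvent =
  | { type: "RunStarted"; runId: string }
  | { type: "NodeRequested"; runId: string; nodeId: string }
  | { type: "NodeCompleted"; runId: string; nodeId: string }
  | { type: "UserResponded"; runId: string; answer: string };

const queue: RunEvent[] = [];
const runState = new Map<string, { status: string; completed: string[] }>();

function handle(event: RunEvent): void {
  switch (event.type) {
    case "RunStarted":
      runState.set(event.runId, { status: "running", completed: [] });
      // Kick off the first node of the run.
      queue.push({ type: "NodeRequested", runId: event.runId, nodeId: "start" });
      break;
    case "NodeRequested":
      // A real runtime executes the node here (possibly pausing for a user);
      // this sketch completes it immediately.
      queue.push({ type: "NodeCompleted", runId: event.runId, nodeId: event.nodeId });
      break;
    case "NodeCompleted":
      runState.get(event.runId)!.completed.push(event.nodeId);
      break;
    case "UserResponded":
      // Resuming a paused run is just another event on the same queue.
      runState.get(event.runId)!.status = "running";
      break;
  }
}

function drain(): void {
  while (queue.length > 0) handle(queue.shift()!);
}

queue.push({ type: "RunStarted", runId: "r1" });
drain();
```

Because every transition is an event persisted alongside run state, pause/resume falls out naturally: a paused run simply waits for the next `UserResponded` event.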

Getting started

Prerequisites

  • Node.js version in .nvmrc (e.g. 22.x)
  • npm or pnpm

Installation

  1. Clone the repo and enter the project: git clone <repo-url> && cd agentron
  2. Install dependencies: npm run install:ui or pnpm install (UI only) or npm install (full, including desktop)
  3. Run the app: npm run dev:ui or pnpm run dev:ui, then open http://localhost:3000

Desktop app: Install Agentron as a standalone Electron app (no Node.js required). Download installers from the Download page or GitHub Releases. The app starts the UI and stores data locally.

Full steps, troubleshooting, and desktop build: INSTALL.md.

Usage

After starting the app, open Chat and try: "What tools do I have?" or "Create a simple agent that says hello." The assistant uses tools to create and edit agents, workflows, and tools. See Quick start in the docs for more prompts.

Project structure

Path              Description
packages/ui       Next.js UI
packages/runtime  Local runtime (agent execution)
packages/core     Shared types and utilities
apps/desktop      Electron wrapper
installers        Local LLM installer scripts

Development

Dependency isolation (UI vs Desktop)

To avoid pulling the Electron toolchain during UI work:

npm run install:ui
npm run dev:ui

When you need desktop packaging:

npm run install:desktop

Match CI locally

Use the same checks as CI before pushing:

  1. Node: Use the version in .nvmrc (e.g. nvm use or fnm use).
  2. pnpm: Repo pins packageManager in package.json; with Corepack enabled (corepack enable) you get the same pnpm version as CI.
  3. Install: Run pnpm install --frozen-lockfile (or at least pnpm install) so dependencies match the lockfile.
  4. Run CI checks: pnpm run ci:local runs format:check, typecheck, lint, test:coverage, file-lengths, build:docs, plus build:ui and desktop dist.

E2E tests with local LLMs (optional)

From repo root:

npm run test:e2e-llm

The script starts Ollama if needed and pulls the default E2E model if missing. Prerequisites: Ollama installed. Optional: Podman for run-code and container scenarios. Default model: Qwen 3 8B (qwen3:8b). Override with E2E_LLM_MODEL (e.g. E2E_LLM_MODEL=qwen2.5:3b npm run test:e2e-llm). These tests are not run in CI.

Model                Env                        Notes
Qwen 3 8B (default)  E2E_LLM_MODEL=qwen3:8b     Better for heap e2e
Qwen 3 14B           E2E_LLM_MODEL=qwen3:14b    Larger, higher quality
Qwen 2.5 3B          E2E_LLM_MODEL=qwen2.5:3b   Faster, smaller
Llama 3.2            E2E_LLM_MODEL=llama3.2     Meta model; script auto-pulls if missing
Phi-3                E2E_LLM_MODEL=phi3         Microsoft small, fast; script auto-pulls if missing

Optional env: OLLAMA_BASE_URL (default http://localhost:11434), E2E_SAVE_ARTIFACTS=1, E2E_LOG_DIR.

Optional dependencies: To build the UI or run tests with coverage, optional deps must be installed. Set optional=true in .npmrc or run npm install --include=optional. CI uses npm install --include=optional.

Documentation

License

See LICENSE.
