Minimal self-evolving agent scheduler with hard isolation between the immutable core and the writable agent workspace.
Use one mental model everywhere:
- `rules` -> stable global policy in `prompts/rules.md`
- `roles` -> per-agent identity and delegation boundaries in `roles/`
- `context providers` -> current queue/state/environment facts from `context.d/`
- `skills` -> reusable procedures; not part of repo prompt assembly
Repo-root agents/ is retired. Do not add new prompt material there.
marrow-core keeps the l1 / l2 / l3 folder layout as an architectural convention and prompt-shaping aid, not as runtime-enforced metadata.
| Level | Directory | Roles | Responsibility |
|---|---|---|---|
| L1 | `roles/l1/` | scout, conductor, refit | scheduled monitoring, operational ownership, strategic closure |
| L2 | `roles/l2/` | refactor-lead, prototype-lead, review-lead, ops-lead | bounded domain ownership with downward delegation |
| L3 | `roles/l3/` | analyst, researcher, coder, tester, writer, git-ops, filer | tightly scoped execution with no further delegation |
Delegation policy:
- L1 -> L2/L3: allowed
- L2 -> L3: allowed where declared
- L3 -> *: forbidden; no upward calls
- one accountable owner per workstream
- max delegation depth: 2 hops
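The policy above is small enough to express as data plus a checker. The sketch below is illustrative only; the names and structure are not marrow-core internals:

```python
# Delegation policy as data plus a checker (illustrative, not marrow-core code).
ALLOWED = {
    "l1": {"l2", "l3"},  # L1 -> L2/L3 allowed
    "l2": {"l3"},        # L2 -> L3 allowed where declared
    "l3": set(),         # L3 delegates to no one
}
MAX_HOPS = 2

def delegation_ok(chain):
    """chain is an ordered list of levels; ["l1", "l2", "l3"] is 2 hops."""
    hops = len(chain) - 1
    if hops > MAX_HOPS:
        return False
    return all(callee in ALLOWED[caller] for caller, callee in zip(chain, chain[1:]))
```

A chain like `["l1", "l2", "l3"]` sits exactly at the 2-hop limit; anything longer, or any upward call, is rejected.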
The canonical source of truth is now roles/ plus roles.toml.
- `roles/` stores role prompts, directory layout, and capability declarations
- `roles.toml` stores model-tier mapping for casting
- `.opencode/agents/` is the cast runtime surface generated by `role-forge`
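A model-tier mapping in `roles.toml` could look roughly like the fragment below. The table and key names here are hypothetical; the real schema is whatever `role-forge` consumes, so consult the file in the repo:

```toml
# Hypothetical shape only; check roles.toml in the repo for the real schema.
[tiers]
scout = "fast"
conductor = "standard"
refit = "deep"
```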
marrow run # persistent heartbeat loop
marrow run-once # one tick per scheduled main, then exit
marrow dry-run # assemble prompts without running agents
marrow sync-once # one bounded sync attempt with structured result codes
marrow setup # init workspace dirs and sync role symlinks
marrow scaffold # create a new writable workspace skeleton and starter config
marrow validate # check config and show summary
marrow doctor # verify workspace, context dirs, and agent command availability
marrow status # query live heartbeat state over IPC
marrow install-service # render launchd or systemd service files
marrow task add # submit a task into tasks/queue via IPC
marrow task list # inspect queued tasks via IPC
core_dir = "/opt/marrow-core"
[ipc]
enabled = true
[sync]
enabled = true
interval_seconds = 3600
failure_backoff_seconds = 300
[[agents]]
name = "scout"
heartbeat_interval = 300
heartbeat_timeout = 500
workspace = "/Users/marrow"
agent_command = "/Users/marrow/.opencode/bin/opencode run --agent scout"
context_dirs = ["/Users/marrow/context.d"]
[[agents]]
name = "conductor"
heartbeat_interval = 7200
heartbeat_timeout = 7200
workspace = "/Users/marrow"
agent_command = "/Users/marrow/.opencode/bin/opencode run --agent conductor"
context_dirs = ["/Users/marrow/context.d"]
[[agents]]
name = "refit"
heartbeat_interval = 302400
heartbeat_timeout = 28800
workspace = "/Users/marrow"
agent_command = "/Users/marrow/.opencode/bin/opencode run --agent refit"
context_dirs = ["/Users/marrow/context.d"]

Model tiers live in `roles.toml`.
marrow-core now uses role-forge as the casting runtime. Canonical role files in roles/ are cast into .opencode/agents/, then execution is handed off to the external opencode CLI configured by each agent's agent_command.
That means the effective execution path is:
1. edit canonical role definitions in `roles/`
2. cast them into `.opencode/agents/` via `role-forge`
3. launch `opencode run --agent <name>`
- `marrow_core/contracts.py` - canonical role inventory and workspace topology
- `marrow_core/prompting.py` - context execution and prompt assembly
- `marrow_core/runtime.py` - socket, queue, and binary path resolution
- `marrow_core/task_queue.py` - filesystem queue helpers
- `marrow_core/services.py` - launchd/systemd rendering
- `marrow_core/scaffold.py` - workspace scaffold and starter config generation
- `marrow_core/heartbeat.py`, `marrow_core/cli.py`, `marrow_core/ipc.py` - orchestration layers
runtime/handoff/scout-to-conductor/
runtime/handoff/conductor-to-scout/
runtime/handoff/scout-to-human/
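These directories suggest a simple file-drop handoff between roles. A minimal sketch follows; the payload keys and file naming are assumptions, not marrow-core's actual format:

```python
import json
import time
from pathlib import Path

# Assumed file-drop handoff; payload keys and naming are hypothetical.
def drop_handoff(root, src, dst, summary):
    box = Path(root) / "runtime" / "handoff" / f"{src}-to-{dst}"
    box.mkdir(parents=True, exist_ok=True)
    path = box / f"{int(time.time())}.json"
    path.write_text(json.dumps({"from": src, "to": dst, "summary": summary}))
    return path

note = drop_handoff(".", "scout", "conductor", "queue drained, 2 anomalies flagged")
```

The receiving role would poll its inbox directory on its next heartbeat tick and consume any files it finds.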
marrow install-service --config marrow.toml --platform darwin --output-dir ./service-out
marrow install-service --config marrow.toml --platform linux --output-dir ./service-out

The repo uses one long-running service per platform, with CLI-managed periodic sync inside `marrow run`.
Use `marrow sync-once` for the bounded update path; `marrow install-service` only emits the primary runtime service file for each platform.
Install on a fresh machine:
git clone https://github.com/zrr1999/marrow-core.git /opt/marrow-core
cd /opt/marrow-core
sudo ./setup.sh

What this does:
- creates or updates `/opt/marrow-core/.venv`
- ensures the `/Users/marrow` workspace directories exist
- casts canonical roles into `/Users/marrow/.opencode/agents/`
- renders and installs the single heartbeat service for your platform
Update an existing installation:
cd /opt/marrow-core
python -m marrow_core.cli sync-once --config marrow.toml

Result codes:
- `0` -> noop, nothing changed
- `10` -> reloaded, safe runtime data changed
- `11` -> restart_required, let the service manager restart `marrow run`
- `1` -> failed, inspect the sync state file and logs
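A wrapper that dispatches on these result codes might look like the sketch below; the CLI invocation is shown for shape only, and the service-restart step is platform-specific:

```python
import subprocess

# Map marrow sync-once exit codes to follow-up actions (sketch only).
ACTIONS = {
    0: "noop",               # nothing changed
    10: "reloaded",          # safe runtime data changed
    11: "restart_required",  # hand control back to launchd/systemd
    1: "failed",             # inspect the sync state file and logs
}

def classify(returncode):
    # Unknown codes are treated as failures.
    return ACTIONS.get(returncode, "failed")

def sync_once(config="marrow.toml"):
    rc = subprocess.run(
        ["python", "-m", "marrow_core.cli", "sync-once", "--config", config]
    ).returncode
    return classify(rc)
```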
Useful follow-up checks:
python -m marrow_core.cli validate --config marrow.toml
python -m marrow_core.cli install-service --config marrow.toml --platform auto --output-dir ./service-out
python -m marrow_core.cli status --config marrow.toml

CLI-managed periodic sync runs inside the main heartbeat service by spawning `marrow sync-once` as a subprocess.
That keeps risky update work isolated while preserving one place to observe failures and one service lifecycle to manage.
Runtime role files are no longer hand-written or symlinked into .opencode/agents/.
Instead, marrow-core depends on the role-forge main branch and casts the canonical roles/ into
OpenCode output during setup/sync.
Repository-local quality tools are intended to be invoked with uvx rather than pinned as project runtime dependencies. See Justfile for the standard commands for ruff, ty, and prek.
See AGENTS.md for the full contract and filesystem model.