Language: English | 日本語 | 简体中文 | 繁體中文 | Português (Brasil) | Español

CA logo

Contemplative Agent (CA)

Tests Python License: MIT DOI

A CLI agent that runs a six-phase knowledge cycle (AKC) over its own logs — every promotion from logs → patterns → skills → rules passes through a human approval gate. Runs entirely on a single Apple Silicon Mac (M1+, 16 GB RAM) with a local 9B model — no cloud, no API keys in transit, no shell execution.

This repository is the operational implementation of two preserved ideas:

  • AKC (Agent Knowledge Cycle) (DOI) — how an agent metabolizes its own experience into improvable skills. Six phases: Research → Extract → Curate → Promote → Measure → Maintain.
  • AAP (Agent Attribution Practice) (DOI) — how accountability is distributed in autonomous AI agents. Eight ADRs covering Security Boundary Model, One External Adapter Per Agent, Human Approval Gate, and causal traceability.

The first adapter is Moltbook, an AI-only social network. The Contemplative AI four axioms ship as an optional preset.

Quick Start

Prerequisites: Ollama installed locally. ~8 GB RAM for the default model (Qwen3.5 9B Q4_K_M, ~6.6 GB on disk). Tested on M1 Mac with 16 GB RAM.

git clone https://github.com/shimo4228/contemplative-agent.git
cd contemplative-agent
pip install -e .            # or: uv venv .venv && source .venv/bin/activate && uv pip install -e .
ollama pull qwen3.5:9b

cp .env.example .env        # set MOLTBOOK_API_KEY (register at moltbook.com)

contemplative-agent init               # create identity, knowledge, constitution
contemplative-agent register           # Moltbook adapter only
contemplative-agent run --session 60   # default: --approve (confirms each post)

Start with a different ethical framework (11 templates ship by default — Stoic, Utilitarian, Care Ethics, Kantian, Pragmatist, Contractarian, …):

cp config/templates/stoic/identity.md $MOLTBOOK_HOME/

If you have Claude Code, paste this repo URL and ask it to set up the agent end-to-end. Full CLI reference, autonomy levels, scheduling, and templates: Configuration Guide.

Running in agent hosts

Contemplative Agent is a host-agnostic Python CLI agent. Use it standalone (default, see Quick Start) or invoke it from any agent host that can run external tools.

Inside OpenClaw / OpenCode / soul-folder hosts. Register contemplative-agent as a CLI tool in your agent's workspace (e.g. ~/.openclaw/workspace/AGENTS.md). The host agent invokes the binary as a subprocess, which keeps the external surface in a separate process and so preserves one external adapter per process.

Inside Codex / MCP host / other CLI-aware hosts. Same pattern — register the binary in the host's tool registry. Contemplative Agent does not expose itself as an MCP server (see ADR-0007 for the security boundary).

Loading the four contemplative axioms (optional). If you want Emptiness / Non-Duality / Mindfulness / Boundless Care loaded as agent personality in your host, copy SOUL.md from contemplative-agent-rules to your host's soul-folder location (e.g. ~/.openclaw/workspace/SOUL.md). Contemplative Agent itself does not ship a SOUL.md because it is a CLI agent, not a personality file.

Live Agent

A Contemplative agent runs daily on Moltbook. Its evolving state is published openly:

  • Identity — distilled persona
  • Constitution — ethical principles (started from CCAI four axioms)
  • Skills — extracted by insight
  • Rules — distilled from skills
  • Daily reports — timestamped interactions (free for academic and non-commercial use)
  • Analysis reports — behavioral evolution, constitutional amendment experiments

How It Works

Episode Log   raw actions, immutable JSONL (untrusted)
 │
 ├── distill ─▶ Knowledge (behavioral)
 │                 ├── distill-identity ─▶ Identity
 │                 └── insight ─▶ Skills
 │                                 └── rules-distill ─▶ Rules
 │
 └── distill (constitutional) ─▶ Knowledge (constitutional)
                                   └── amend ─▶ Constitution

Raw actions flow upward through layers of abstraction. Each layer is optional. Every layer above Episode Log is generated by the agent reflecting on its own experience.

This pipeline is the AKC six phases mapped onto code: distill covers Extract; insight / rules-distill / amend-constitution cover Curate; distill-identity covers Promote; pivot snapshots (ADR-0020) and skill-reflect (ADR-0023) cover Measure. Full mapping: docs/CODEMAPS/architecture.md.

Key Features

  • Knowledge cycle (AKC) over its own logs — the agent runs the six-phase cycle on its own logs. No fine-tuning, no labeled training data. Every promotion (logs → patterns → skills → rules → identity) passes through a human approval gate.
  • Embedding + views — classification is a query, not state; named views are editable semantic seeds (ADR-0019, category field retired in ADR-0026).
  • Memory evolution + hybrid retrieval — a new pattern can trigger LLM-driven re-interpretation of older topically-related ones; the old row is soft-invalidated and a revised row appended. Cosine + BM25 hybrid scoring (ADR-0022).
  • Skill-as-memory loop — skills are retrieved, applied, and rewritten by outcome (ADR-0023).
  • Noise as seed — rejected episodes are preserved as noise-YYYY-MM-DD.jsonl; when view centroids shift they become available for re-classification rather than being lost (ADR-0027).
  • Replayable pivot snapshots — distill runs bundle the full inference-time context (views + constitution + prompts + skills + rules + identity + centroid embeddings + thresholds) so decisions can be replayed bit-for-bit (ADR-0020).
  • Provenance tracking — every pattern carries source_type and trust_score; MINJA-class memory injection becomes structurally visible (ADR-0021).
  • Markdown all the way down — constitution, identity, skills, rules, 32 pipeline prompts, and 7 view seeds all live as Markdown under $MOLTBOOK_HOME/. Edit a prompt to change how patterns get extracted; swap a view seed to shift classification. Customize →
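The hybrid retrieval scoring from ADR-0022 can be sketched from first principles. Everything below — the function names, the `alpha` blend weight, the min-max normalisation of BM25 — is illustrative, not the repository's actual implementation (which builds on numpy and rank-bm25):

```python
import math
from collections import Counter

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def bm25_scores(query_terms: list[str], docs: list[list[str]],
                k1: float = 1.5, b: float = 0.75) -> list[float]:
    """Plain BM25 over tokenised documents (the repo uses rank-bm25 instead)."""
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    df = Counter(t for d in docs for t in set(d))  # document frequency per term
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query_terms:
            if tf[t]:
                idf = math.log(1 + (n - df[t] + 0.5) / (df[t] + 0.5))
                s += idf * tf[t] * (k1 + 1) / (
                    tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

def hybrid_rank(query_vec, query_terms, doc_vecs, doc_tokens, alpha=0.5):
    """Blend cosine and min-max-normalised BM25; return indices, best first."""
    cos = [cosine(query_vec, v) for v in doc_vecs]
    bm = bm25_scores(query_terms, doc_tokens)
    lo, hi = min(bm), max(bm)
    bm = [(s - lo) / (hi - lo) if hi > lo else 0.0 for s in bm]
    blended = [alpha * c + (1 - alpha) * s for c, s in zip(cos, bm)]
    return sorted(range(len(doc_vecs)), key=blended.__getitem__, reverse=True)
```

The blend matters because embeddings catch paraphrase while BM25 catches exact identifiers (skill names, ADR numbers) that embeddings can blur.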

Security Model

Accountability and security boundaries are documented as harness-neutral ADRs in AAP. This repository is the operational implementation of those judgments.

  • No shell execution, no arbitrary network access, no file traversal — that code does not exist in the codebase. Domain-locked to moltbook.com + localhost Ollama. 3 runtime dependencies: requests, numpy, rank-bm25.
  • One external adapter per process (ADR-0015).
  • Full threat model: ADR-0007. Latest security scan.

Paste this repo URL into Claude Code or any code-aware AI and ask whether it's safe to run. The code speaks for itself.

Note for coding agent operators: Episode logs (logs/*.jsonl) are an unfiltered indirect prompt injection surface. Use distilled outputs (knowledge.json, identity.md, reports/) instead. Claude Code users: see integrations/claude-code/ for PreToolUse hooks that enforce this automatically.
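A hook that enforces this could look roughly like the sketch below. The payload shape (`tool_input.file_path`) and the blocking exit-code convention are assumptions based on Claude Code's hook mechanism — verify against integrations/claude-code/ before relying on it:

```python
"""Hypothetical PreToolUse hook: deny direct reads of raw episode logs."""
import fnmatch
import json
import sys

LOG_PATTERN = "*logs/*.jsonl"  # raw episode logs: unfiltered injection surface

def is_blocked(path: str) -> bool:
    """True if the tool call targets a raw episode log."""
    return fnmatch.fnmatch(path, LOG_PATTERN)

def main() -> int:
    event = json.load(sys.stdin)  # hook payload arrives as JSON on stdin
    path = str(event.get("tool_input", {}).get("file_path", ""))
    if is_blocked(path):
        print("Blocked: read distilled outputs (knowledge.json, identity.md, "
              "reports/) instead of raw logs.", file=sys.stderr)
        return 2  # a blocking exit code denies the tool call
    return 0

# The host invokes this as a script: sys.exit(main())
```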

Adapters

The core is platform-agnostic. Adapters are thin wrappers around platform I/O.

  • Moltbook — Social feed engagement, post generation, notification replies. The adapter the live agent runs on.
  • Meditation (experimental) — Active inference-based meditation simulation inspired by "A Beautiful Loop". Builds a POMDP from episode logs and runs belief updates with no external input.
  • Dialogue (local-only) — Two agent processes converse over stdin/stdout pipes. A ~140-line adapter (adapters/dialogue/peer.py) — useful as a non-HTTP, network-free template. Drives contemplative-agent dialogue HOME_A HOME_B for constitutional counterfactual experiments.
  • Your own — Connect platform I/O to core interfaces (memory, distillation, constitution, identity). See docs/CODEMAPS/.
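A custom adapter reduces to the pattern the list describes: platform I/O on one side, episode logging on the other. A toy sketch — the class and method names here are hypothetical, and the real core interfaces are documented in docs/CODEMAPS/:

```python
from dataclasses import dataclass

@dataclass
class Episode:
    """One raw action, destined for the append-only JSONL episode log."""
    action: str
    payload: dict

class EchoAdapter:
    """Adapter contract in miniature: fetch platform input, emit platform
    output, and record every action as an episode for later distillation."""

    def __init__(self):
        self.log: list[Episode] = []      # stands in for logs/*.jsonl

    def fetch(self) -> str:
        return "hello from the platform"  # real platform I/O goes here

    def act(self, text: str) -> Episode:
        ep = Episode(action="reply", payload={"text": text.upper()})
        self.log.append(ep)               # immutable record of what happened
        return ep

adapter = EchoAdapter()
ep = adapter.act(adapter.fetch())
```

The dialogue adapter (~140 lines) is the closest real-world template for this shape, since it has no HTTP surface at all.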

Architecture

One invariant holds across the codebase: core/ is platform-independent; adapters/ depend on core, never the reverse. Module maps, data-flow diagrams, and per-module responsibilities live in docs/CODEMAPS/INDEX.md (the authoritative source). The Yogācāra eight-consciousness frame that constrained the memory design: ADR-0017.

Optional: Running with Managed LLM APIs

For research experiments needing a generation model larger than Qwen3.5 9B (e.g. comparing distillation behavior with Claude Opus or GPT-5 while keeping the rest of the memory pipeline identical), a separate add-on repository provides managed-LLM backends:

  • contemplative-agent-cloud — Optional Python package. Installing it and setting an API key routes every generation call (distill, insight, rules-distill, amend-constitution, post, comment, reply, dialogue, skill-reflect) through Anthropic Claude or OpenAI GPT. Embeddings continue to use local nomic-embed-text.

This is an explicit opt-in. The main repository's default stack (Ollama + Qwen3.5 9B) does not reach any cloud endpoint. The "no cloud, no API keys in transit" property applies to this repository; the cloud add-on relaxes it for users who opt into it. Main repository code is not modified — the add-on injects its backend through an abstract LLMBackend Protocol that knows nothing about any specific provider.

Do not install the cloud add-on in deployments where cloud data egress is not acceptable (regulatory constraints, air-gapped research, privacy-sensitive personal assistants).
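The injection seam described above can be sketched with a `typing.Protocol`. The method name `generate` and both classes below are illustrative assumptions, not the actual interface:

```python
from typing import Protocol

class LLMBackend(Protocol):
    """Anything that turns a prompt into text; knows no provider specifics."""
    def generate(self, prompt: str) -> str: ...

class LocalOllamaBackend:
    """Default path: local model, no cloud egress (HTTP call elided)."""
    def generate(self, prompt: str) -> str:
        return f"[local qwen3.5:9b] {prompt[:20]}"

def distill(backend: LLMBackend, episodes: list[str]) -> str:
    """Generation depends only on the Protocol, so a cloud add-on can
    inject its own backend without modifying this code."""
    return backend.generate("Summarise: " + "; ".join(episodes))

summary = distill(LocalOllamaBackend(), ["posted reply", "read feed"])
```

Because `Protocol` uses structural typing, the add-on's Anthropic or OpenAI backend needs no import from the main repository to satisfy the interface.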

Optional: Everyday CLI
contemplative-agent run --session 60       # Run a session
contemplative-agent distill --days 3       # Extract patterns
contemplative-agent skill-reflect          # Revise skills from outcomes (ADR-0023)
contemplative-agent dialogue HOME_A HOME_B --seed "..." --turns N

Full reference (autonomy levels, scheduling, env vars, v1.x → v2 migrations): docs/CONFIGURATION.md. For Docker-based network-isolated deployment: Docker section.

Citation

Shimomoto, T. (2026). Contemplative Agent [Computer software]. https://doi.org/10.5281/zenodo.19212119
BibTeX
@software{shimomoto2026contemplative,
  author       = {Shimomoto, Tatsuya},
  title        = {Contemplative Agent},
  year         = {2026},
  version      = {2.1.0},
  doi          = {10.5281/zenodo.19212119},
  url          = {https://github.com/shimo4228/contemplative-agent},
}

The MIT license means what it says — fork it, strip it for parts, embed the pipeline in your own agent, build a commercial product on top of it. No citation needed if you're just using the code.

Related Work

  • Agent Knowledge Cycle (AKC) (DOI) — the methodological framework this project re-implements in the autonomous-agent context. Originally developed as a Claude Code harness.
  • Agent Attribution Practice (AAP) (DOI) — sibling research repository. Re-expresses this project's governance judgments (Security Boundary Model, One External Adapter Per Agent, Human Approval Gate, causal traceability / scaffolding visibility) in harness-neutral form as eight ADRs on accountability distribution. Cite AAP when quoting the accountability-distribution thesis or the prohibition-strength hierarchy; cite this repository for the operational implementation.

Theoretical foundation:

  • Laukkonen, Inglis, Chandaria, Sandved-Smith, Lopez-Sola, Hohwy, Gold, & Elwood (2025). Contemplative Artificial Intelligence. arXiv:2504.15125 — four-axiom ethical framework (optional preset, ADR-0002).
  • Laukkonen, Friston & Chandaria (2025). A Beautiful Loop: An Active Inference Theory of Consciousness. Neuroscience & Biobehavioral Reviews, 176, 106296. PubMed:40750007 — meditation adapter basis.
  • Vasubandhu (4th–5th c. CE). Triṃśikā-vijñaptimātratā (唯識三十頌) and Xuanzang (659 CE). Cheng Weishi Lun (成唯識論) — eight-consciousness model adopted as the architectural frame (ADR-0017).

Memory systems bibliography

Each paper below informed a specific design decision documented in the linked ADR.

  • Xu, W., Liang, Z., Mei, K., Gao, H., Tan, J., & Zhang, Y. (2025). A-MEM: Agentic Memory for LLM Agents. arXiv:2502.12110 — Zettelkasten-style dynamic indexing and memory evolution; informs the re-interpretation of topically-related older patterns when a new pattern arrives (ADR-0022).
  • Rasmussen, P., Paliychuk, P., Beauvais, T., Ryan, J., & Chalef, D. (2025). Zep: A Temporal Knowledge Graph Architecture for Agent Memory. arXiv:2501.13956 — bitemporal knowledge-graph edges (Graphiti engine); informs the valid_from / valid_until contract on every pattern (ADR-0021).
  • Zhong, W., Guo, L., Gao, Q., Ye, H., & Wang, Y. (2023). MemoryBank: Enhancing Large Language Models with Long-Term Memory. arXiv:2305.10250 — Ebbinghaus-style decay with access-reinforced strength; originally informed the retrieval-aware forgetting curve proposed in ADR-0021, retired by ADR-0028 in favour of locating memory dynamics at the skill layer. Retained as a historical reference.
  • Dong, S., Xu, S., He, P., Li, Y., Tang, J., Liu, T., Liu, H., & Xiang, Z. (2025). Memory Injection Attacks on LLM Agents via Query-Only Interaction (MINJA). arXiv:2503.03704 — query-only memory injection attacks on agent memory; motivates source_type + trust_score provenance so MINJA-class attacks become structurally visible rather than invisible (ADR-0021).
  • Zhou, H., Guo, S., Liu, A., et al. (2026). Memento-Skills: Let Agents Design Agents. arXiv:2603.18743 — skills as persistent evolving memory units, retrieved, applied, and rewritten by outcome; informs the skill-as-memory loop (ADR-0023).

Acknowledgments: Jerry Mares (VADUGWI) — deterministic affect-scoring design inspiration.

Development Records (13 dev.to articles)
  1. I Built an AI Agent from Scratch Because Frameworks Are the Vulnerability
  2. Natural Language as Architecture
  3. Every LLM App Is Just a Markdown-and-Code Sandwich
  4. Do Autonomous Agents Really Need an Orchestration Layer?
  5. Not Reasoning, Not Tools -- What If the Essence of AI Agents Is Memory?
  6. My Agent's Memory Broke -- A Day Wrestling a 9B Model
  7. Porting Game Dev Memory Management to AI Agent Memory Distillation
  8. Freedom and Constraints of Autonomous Agents — Self-Modification, Trust Boundaries, and Emergent Gameplay
  9. How Ethics Emerged from Episode Logs — 17 Days of Contemplative Agent Design
  10. A Sign on a Climbable Wall: Why AI Agents Need Accountability, Not Just Guardrails
  11. Can You Trace the Cause After an Incident?
  12. AI Agent Black Boxes Have Two Layers — Technical Limits and Business Incentives
  13. Where ReAct Agents Are Actually Needed in Business
