Releases: MachineWisdomAI/fava-trails
v0.5.5
What's Changed
- feat: multi-tool MCP compatibility (Codex, Crush) by @timeleft-- in #35
- chore(deps): Bump pyasn1 from 0.6.2 to 0.6.3 in the uv group across 1 directory by @dependabot[bot] in #34
- chore(deps): Bump requests from 2.32.5 to 2.33.0 in the uv group across 1 directory by @dependabot[bot] in #36
- feat: add codev trust gate integration (Spec 26) by @timeleft-- in #37
- chore(deps): Bump cryptography from 46.0.5 to 46.0.6 in the uv group across 1 directory by @dependabot[bot] in #38
- fix: extend integrate codev to configure project artifacts (TICK 26-001) by @timeleft-- in #39
New Contributors
- @dependabot[bot] made their first contribution in #34
Full Changelog: v0.5.3...v0.5.5
v0.5.4
What's Changed
- feat: multi-tool MCP compatibility (Codex, Crush) by @timeleft-- in #35
- chore(deps): Bump pyasn1 from 0.6.2 to 0.6.3 in the uv group across 1 directory by @dependabot[bot] in #34
- chore(deps): Bump requests from 2.32.5 to 2.33.0 in the uv group across 1 directory by @dependabot[bot] in #36
- feat: add codev trust gate integration (Spec 26) by @timeleft-- in #37
- chore(deps): Bump cryptography from 46.0.5 to 46.0.6 in the uv group across 1 directory by @dependabot[bot] in #38
New Contributors
- @dependabot[bot] made their first contribution in #34
Full Changelog: v0.5.3...v0.5.4
v0.5.3
New in v0.5.x – Lifecycle Hooks and Context Engineering Protocols
Why
Agent memory accumulates noise – across sessions, across team members, across workstreams. Retrieval degrades as memory grows because the memory system has no opinion about quality, freshness, or relevance. v0.5 adds a typed hook system that lets you program lifecycle policy – validate writes, compress on promote, rerank on recall – and three reference protocols adapted from published research.
What you can do now
- Gate writes – reject malformed or low-quality thoughts before they enter memory
- Compress on promote – extractively compress verbose thoughts when they become permanent records
- Rerank retrieval – reorder recall results based on playbook rules that evolve over time
- Track parallel work – validate mapper outputs, monitor batch progress, get "REDUCE READY" signals
- Observe everything – hooks return structured `hook_feedback` in every MCP response
Which protocol should I use?
| Problem | Protocol | Research | Module |
|---|---|---|---|
| Retrieval quality degrades as memory grows | ACE | Stanford, UC Berkeley, and SambaNova, ICLR 2026 | fava_trails.protocols.ace |
| Promoted thoughts are too verbose, diluting recall | SECOM | Tsinghua University and Microsoft, ICLR 2025 | fava_trails.protocols.secom |
| Parallel workers need validated, crash-safe handoffs | RLM | MIT | fava_trails.protocols.rlm |
Quick start
```shell
pip install fava-trails
# or
uv add fava-trails
```
Full CLI for project setup and data repo management
The FAVA Trails CLI is a control plane for project initialization, data repo management, diagnostics, thought retrieval, and protocol configuration.
```shell
# Project setup
fava-trails init                    # Creates .fava-trails.yaml and .env in current directory

# Data repo management
fava-trails bootstrap               # Create a new data repo from scratch
fava-trails clone <remote-url>      # Clone an existing data repo from remote

# Diagnostics and scope management
fava-trails doctor                  # Validate: Jujutsu, data repo, OpenRouter key, config
fava-trails scope set <scope>       # Set current scope
fava-trails scope list              # Show all available scopes

# Thought retrieval
fava-trails get --list <scope> --query "pattern"   # List thoughts matching query
fava-trails get <thought-id> --with-frontmatter    # Retrieve full thought + metadata
fava-trails get --exists <scope>/<thought-id>      # Check if thought exists

# Install dependencies
fava-trails install-jj              # Download and install Jujutsu binary
```
Protocol configuration
Pick a protocol and let the CLI handle the rest:
```shell
fava-trails ace setup --write       # Adds ACE hooks to config.yaml, commits via jj
fava-trails secom setup --write     # Adds SECOM hooks, commits via jj
fava-trails rlm setup --write       # Adds RLM hooks, commits via jj
```
Each command is idempotent (safe to run twice), validates the data repo, and commits the config change so it's versioned and rollback-safe. Run without `--write` to preview the YAML.
For SECOM, pre-download the compression model:
```shell
fava-trails secom warmup
```
Configure LLM provider (OpenRouter by default):
The Trust Gate and any hooks that invoke LLM models need an API key. Configure in config.yaml:
```yaml
openrouter_api_key_env: "OPENROUTER_API_KEY"   # environment variable name
trust_gate_model: "google/gemini-2.5-flash"    # model for Trust Gate review
```
Set the environment variable:
```shell
export OPENROUTER_API_KEY="your-openrouter-key"
```
OpenRouter provides unified access to 300–500+ models from 60+ providers, including Anthropic, OpenAI, Google, Qwen, DeepSeek, and others. Future versions will support additional LLM providers.
What's new in detail
Lifecycle Hooks β Event-Action Pipeline
Seven event classes powering eight lifecycle points, with a typed action vocabulary. Hooks return explicit actions instead of booleans.
Events:
BeforeSaveEvent Β· AfterSaveEvent Β· BeforeProposeEvent Β· AfterProposeEvent Β· AfterSupersedeEvent Β· OnRecallEvent Β· OnStartupEvent
Lifecycle points: before_save Β· after_save Β· before_propose Β· after_propose Β· after_supersede Β· on_recall Β· on_recall_mix Β· on_startup
`on_recall_mix` reuses `OnRecallEvent` with `lifecycle_point="on_recall_mix"` – it fires after merging results from cross-trail searches so hooks can rerank the full merged set.
Actions:
Proceed Β· Reject(reason) Β· Mutate(ThoughtPatch) Β· Redirect Β· Warn Β· Advise Β· Annotate Β· RecallSelect
Execution model:
- `before_*` and `on_recall` hooks run synchronously; their output is returned as `hook_feedback` in the MCP response.
- `after_*` hooks run inline on the caller's asyncio task, with at-most-once delivery. Feedback is accessible in the MCP response via `hook_feedback`.
- Hooks are read-only. They can gate writes, rerank recall, mutate payloads in-flight, and advise. They cannot call `save_thought`, `supersede`, or `propose_truth`.
- `RecallSelect` can subset and rerank recall results, but cannot inject thoughts absent from the original result set.
- Hooks are loaded once at startup from `config.yaml` and never re-read at runtime.
- `TrailContext` provides lazy read-only access to trail stats and recall (bypasses hooks, capped at 50 results).
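As a rough sketch of the event–action shape, here is a `before_save` hook that gates empty writes. The classes below are minimal stand-ins for illustration only – the real event and action types live in fava_trails and may differ in shape:

```python
from dataclasses import dataclass

# Minimal stand-ins for the hook vocabulary described above.
# These are illustrative, not the actual fava_trails imports.
@dataclass
class BeforeSaveEvent:
    lifecycle_point: str
    body: str

@dataclass
class Proceed:
    pass

@dataclass
class Reject:
    reason: str

def gate_empty_thoughts(event: BeforeSaveEvent):
    """Return an explicit action instead of a boolean: Reject malformed
    writes before they enter memory, otherwise Proceed."""
    if not event.body.strip():
        return Reject(reason="empty thought body")
    return Proceed()

result = gate_empty_thoughts(BeforeSaveEvent("before_save", "   "))
print(type(result).__name__)  # Reject
```

The `reason` carried by `Reject` is what would surface to the caller as `hook_feedback`.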
Example hook_feedback in an MCP response:
```json
{
  "hook_feedback": {
    "accepted": true,
    "annotations": {
      "rlm_batch_id": "b001",
      "rlm_mapper_id": "chunk-3",
      "rlm_batch_count": 3,
      "rlm_expected_mappers": 5,
      "rlm_reduce_ready": false
    },
    "advice": [
      {"message": "Mapper 'chunk-3' saved for batch 'b001'. Progress: 3/5 mappers reported.", "code": "rlm_mapper_progress"}
    ]
  }
}
```
Protocol: fava_trails.protocols.ace
Adapted from ACE: Agentic Context Engineering (Stanford, UC Berkeley, and SambaNova, ICLR 2026)
Playbook-driven recall reranking and quality enforcement. Finalized decisions rise above stale drafts. Rules evolve through correction telemetry.
- on_startup: Lazy playbook cache init
- on_recall: `RecallSelect` reranking – playbook rules score each result, reorder by relevance
- on_recall_mix: Same reranking applied to cross-trail merged results (multi-scope searches)
- before_save: Anti-pattern warnings + brevity advisories
- after_save: Cache invalidation + reflector telemetry (5-min TTL fallback)
- after_supersede: Correction telemetry – captures what was wrong and what replaced it
Implements the curation infrastructure. Reasoning and reflection stay application-layer.
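The playbook-driven reranking described above can be sketched in a few lines. The rule contents and result fields here are illustrative assumptions, not ACE's actual rule format:

```python
# Hypothetical playbook: each rule is a (predicate, weight) pair.
# Results are re-ordered by total score, so finalized decisions
# rise above stale drafts.
rules = [
    (lambda r: r["status"] == "finalized", 2.0),   # boost finalized decisions
    (lambda r: r["status"] == "draft", -1.0),      # demote stale drafts
]

def rerank(results):
    def score(r):
        return sum(w for pred, w in rules if pred(r))
    # Stable sort: ties keep the retriever's original order.
    return sorted(results, key=score, reverse=True)

recall = [
    {"id": "t1", "status": "draft"},
    {"id": "t2", "status": "finalized"},
]
print([r["id"] for r in rerank(recall)])  # ['t2', 't1']
```

Because the sort is stable and only reorders the given list, this matches the `RecallSelect` contract: results can be reranked or subset, but nothing new is injected.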
Protocol: fava_trails.protocols.secom
Adapted from SECOM (Tsinghua University and Microsoft, ICLR 2025)
Extractive compression at promote time. Compress once, benefit on every subsequent read.
- before_propose: Inline compression via `Mutate(ThoughtPatch)` – only original tokens survive
- before_save: Verbosity advisory
- on_recall: Density-aware `RecallSelect` boosting compressed thoughts
Original text preserved in version history.
Compression engine: LLMLingua-2 (178M-parameter token classifier). Install with `pip install "fava-trails[secom]"`.
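The "only original tokens survive" property can be illustrated with a toy extractive compressor. LLMLingua-2 scores tokens with a learned classifier; the length-based scorer below is just a stand-in to show the shape of the operation:

```python
def compress(text: str, keep_ratio: float = 0.5) -> str:
    """Toy extractive compression: keep the highest-scoring original
    tokens in their original order. The 'score' here is token length,
    a placeholder for LLMLingua-2's learned token classifier."""
    tokens = text.split()
    n_keep = max(1, int(len(tokens) * keep_ratio))
    # Rank token positions by score, keep the top n, restore original order.
    ranked = sorted(range(len(tokens)), key=lambda i: len(tokens[i]), reverse=True)
    kept = sorted(ranked[:n_keep])
    return " ".join(tokens[i] for i in kept)

out = compress("the extremely verbose explanation of a simple decision")
print(out)  # extremely verbose explanation decision
```

Every output token comes from the input, so the compressed thought never paraphrases – it only drops – which is what makes pairing it with version history safe.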
Protocol: fava_trails.protocols.rlm
Adapted from Recursive Language Models (MIT) and Anthropic multi-agent patterns
Mapper validation, batch tracking, deterministic reducer ordering. Every mapper output is a crash-safe atomic commit.
- before_save: Validates mapper outputs (requires `mapper_id`, rejects malformed)
- after_save: Per-batch distinct mapper counting, "REDUCE READY" advisory at quorum
- on_recall: Deterministic `mapper_id` ordering via `RecallSelect`
- on_recall_mix: Same deterministic sort for cross-trail merged results (distributed MapReduce)
`fail_mode: closed`. The batch counter is advisory – the reducer must verify completion via `recall()`.
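The per-batch distinct mapper counting can be sketched as follows. The field names mirror the `hook_feedback` annotations shown earlier, but the function shape is an illustrative assumption, not the RLM protocol's actual API:

```python
from collections import defaultdict

# Distinct mapper_ids seen per batch. Using a set means a mapper that
# re-saves (e.g. after a crash and retry) is not double-counted.
batches: dict = defaultdict(set)

def after_save(batch_id: str, mapper_id: str, expected: int) -> dict:
    """Record one mapper output and report batch progress."""
    batches[batch_id].add(mapper_id)
    count = len(batches[batch_id])
    return {
        "rlm_batch_count": count,
        "rlm_reduce_ready": count >= expected,  # "REDUCE READY" at quorum
    }

after_save("b001", "chunk-1", expected=2)
fb = after_save("b001", "chunk-2", expected=2)
print(fb["rlm_reduce_ready"])  # True
```

This is exactly why the counter stays advisory: it tracks who reported, while the authoritative record is the committed mapper outputs themselves, which the reducer re-checks via recall.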
Native LLM Client
Unified LLM invocation for hooks that need model inference (e.g. Trust Gate review, ACE playbook scoring, SECOM compression).
Multi-provider support via any-llm-sdk:
- Primary (default): OpenRouter – access 300–500+ models from 60+ providers including Anthropic, OpenAI, Google, Qwen, and others through a single API. Configure via the `OPENROUTER_API_KEY` environment variable named in `config.yaml`.
- Extensible: any-llm-sdk supports additional providers (Anthropic, OpenAI, Bedrock, etc.). Future versions can add provider selection to `config.yaml` for seamless switching.
Built-in reliability: Exponential backoff with jitter on rate limits and transient failures. Configurable model aliases for version pinning and fallbacks. Unified exception hierarchy across all providers.
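For intuition, exponential backoff with full jitter looks like the sketch below. This is an assumption about the general policy, not a description of any-llm-sdk's internals:

```python
import random

def backoff_delays(attempts: int, base: float = 0.5, cap: float = 30.0, rng=None):
    """Exponential backoff with full jitter: each retry waits a random
    amount up to min(cap, base * 2**attempt), so concurrent clients
    hitting the same rate limit don't retry in lockstep."""
    rng = rng or random.Random()
    return [rng.uniform(0, min(cap, base * 2 ** i)) for i in range(attempts)]

# Seeded RNG shown only to make the example reproducible.
delays = backoff_delays(6, rng=random.Random(0))
```

The cap bounds the worst-case wait; the jitter spreads retries out, which matters most when many hooks share one provider quota.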
Trust Gate (built-in LLM verification layer): Every promoted thought passes through an independent LLM reviewer before entering shared memory. Configurable model and timeout in config.yaml; defaults to google/gemini-2.5-flash via OpenRouter.
What's Changed
- chore: add agent guardrails and semantic PR check by @timeleft-- in #9
- feat: Replace OpenAI SDK with any-llm-sdk by @timeleft-- in #12
- feat(llm): Extract LLM client library with multi-provider support by @timeleft-- in #8
- feat: Add get CLI subcommand for programmatic artifact retrieval by @timeleft-- in #10
- feat: Add lifecycle hooks system (Spec 17) by @timeleft-- in h...
v0.5.2
Release v0.5.2 – Lifecycle Hooks and Context Engineering Protocols
These release notes are identical to v0.5.3 above.
Full Changelog: v0.4.12...v0.5.2
v0.5.1 – Lifecycle Hooks and Context Engineering Protocols
These release notes are identical to v0.5.3 above.
What's Changed
- chore: add agent guardrails and semantic PR check by @timeleft-- in #9
- feat: Replace OpenAI SDK with any-llm-sdk by @timeleft-- in #12
- feat(llm): Extract LLM client library with multi-provider support by @timeleft-- in #8
- feat: Add get CLI subcommand for programmatic artifact retrieval by @timeleft-- in #10
- feat: Add lifecycle hooks system (Spec 17) by @timeleft-- in ...
v0.4.12
What's New
New: fava-trails clone command
Clone an existing data repo with proper JJ colocated mode in one step:
```shell
fava-trails clone https://github.com/YOUR-ORG/fava-trails-data.git fava-trails-data
```
Improved: Non-colocated repo detection
If a data repo was cloned without `--colocate`, the server now raises a clear error with fix instructions instead of crashing.
Improved: fava-trails doctor UX
- Shows where `FAVA_TRAILS_DATA_REPO` is resolved from (env var vs default)
- Suggests `export FAVA_TRAILS_DATA_REPO=...` when using the default path
Improved: Clone command hardening (GPT-5.1 Codex review)
- Handles target path being a file
- Creates parent directories for nested paths
- Accurate bookmark tracking status messages
Docs
- Clarified Claude Code CLI vs Claude Desktop MCP config paths
- Updated cross-machine sync docs to use `fava-trails clone`
- Bootstrap help text directs to `clone` for existing remotes