## Summary

Design caching and memoization for context steps and LLM responses. This is NOT IN SPEC and requires spec authoring.
Parent issue: #105 — Missing Modality D
## Why

If a `context: shell:` step returns the same result as last run (e.g., `cargo test` with no code changes), reuse it. If a prompt+context hash matches a prior run, optionally reuse the LLM response. This saves significant cost and latency in iterative development, where you re-run pipelines frequently with small changes.
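To make "prompt+context hash" concrete, here is one possible sketch of cache-key derivation. Since the spec section does not exist yet, everything here is hypothetical: the `cache_key` helper, the `step_kind` labels, and the payload fields are illustrative choices, and the real spec would need to decide exactly which inputs feed the hash (e.g., whether shell steps hash file contents or just the command).

```python
import hashlib
import json

def cache_key(step_kind: str, payload: dict) -> str:
    """Derive a stable cache key from a step's inputs.

    Hypothetical sketch: serializes the payload canonically
    (sorted keys, no whitespace) so that equal inputs always
    produce the same SHA-256 digest.
    """
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(f"{step_kind}\n{canonical}".encode()).hexdigest()

# A shell context step keyed by its command (what else feeds the
# hash -- file mtimes? contents? -- is an open spec question):
shell_key = cache_key("context.shell", {"cmd": "cargo test"})

# An LLM step keyed by prompt plus resolved context:
llm_key = cache_key("llm", {"prompt": "Fix the failing test", "context": "..."})
```

The key property this buys is content addressing: a re-run with identical inputs maps to the same key, so the cached result can be reused without any timestamp bookkeeping.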
## Design Decisions Needed

- Opt-out: a `cache: false` override?
- What to cache: `context: shell:` output? LLM responses? Both?
- Cache location: `~/.ail/cache/`?
- Configuration: a per-step `cache:` directive? A pipeline-level cache policy?

## Spec Work Required
New spec section needed for caching/memoization.
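To make the open questions concrete, one strawman of what the directives could look like in pipeline config. Every key name below (`cache`, `policy`, `dir`) is a placeholder invented for illustration; none of this is in the spec, and the spec section would define the real names and semantics.

```yaml
# Strawman only: no key shown here exists in the spec yet.
cache:
  policy: prefer-cached      # pipeline-level default (placeholder name)
  dir: ~/.ail/cache/         # location floated in this issue

steps:
  - context:
      shell: "cargo test"
    cache: true              # reuse output if inputs are unchanged
  - prompt: "Fix the failing test"
    cache: false             # per-step override: always call the LLM
```

A sketch like this also surfaces the interaction the spec must pin down: whether a per-step `cache:` value overrides the pipeline-level policy, or merely refines it.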
## Acceptance Criteria