Design & implement caching / memoization #125

@AlexChesser

Description

Summary

Design caching and memoization for context steps and LLM responses. This is NOT IN SPEC and requires spec authoring.

Parent issue: #105 — Missing Modality D

Why

If the inputs to a context: shell: step are unchanged since the last run (e.g., cargo test with no code changes), its previous output can be reused instead of re-executing the command. Likewise, if a prompt+context hash matches a prior run, the LLM response can optionally be reused. This saves significant cost and latency in iterative development, where pipelines are re-run frequently with small changes.
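A minimal sketch of the content-hash idea, assuming the cache key is derived from the command string plus its input contents. The function name and use of the std-library hasher are illustrative only; a real implementation would likely prefer a cryptographic hash such as SHA-256.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Hypothetical cache key: hash the command together with the content
/// of each input it depends on. Identical inputs produce identical keys,
/// so an unchanged `cargo test` step maps to the same cache entry.
fn cache_key(command: &str, inputs: &[&str]) -> u64 {
    let mut h = DefaultHasher::new();
    command.hash(&mut h);
    for input in inputs {
        input.hash(&mut h);
    }
    h.finish()
}
```

Hashing content rather than file modification times makes the key robust to `touch`-style changes that don't alter behavior.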

Design Decisions Needed

  • Cache key strategy — content hash of inputs? File modification times? Explicit cache keys?
  • Cache scope — per-run? Per-session? Persistent across runs?
  • Invalidation — TTL? Content-based? Manual cache: false override?
  • What's cacheable — context: shell: output? LLM responses? Both?
  • Storage — in-memory? On-disk in ~/.ail/cache/?
  • YAML syntax — step-level cache: directive? Pipeline-level cache policy?
  • How cached vs. fresh results are distinguished in the turn log
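One possible shape for the step-level YAML syntax, purely as a strawman for the spec discussion. The `cache:` directive and every field name below are hypothetical, not part of any current spec:

```yaml
# Hypothetical step-level cache directive (illustrative only)
steps:
  - context:
      shell: cargo test
    cache:
      key: content        # hash of command + input contents
      scope: persistent   # survives across runs (e.g., ~/.ail/cache/)
      ttl: 1h             # optional time-based invalidation
  - prompt: "Summarize the test failures."
    cache: false          # opt out of LLM response caching
```

A step-level directive with a pipeline-level default policy would cover both the per-step opt-out and the "cache everything" cases.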

Spec Work Required

New spec section needed for caching/memoization.

Acceptance Criteria

  • Spec section authored
  • Context steps can be cached based on input hash
  • LLM responses can optionally be cached
  • Cache hit/miss recorded in turn log
  • Steps can opt out of caching
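The hit/miss-recording criterion could be satisfied by tagging each step result with a cache status before it is written to the turn log. The names below (`CacheStatus`, `lookup`) are hypothetical, sketched under the assumption that cached results are stored in a key-value map:

```rust
use std::collections::HashMap;

/// Hypothetical marker distinguishing cached from fresh results in the turn log.
#[derive(Debug, PartialEq)]
enum CacheStatus {
    Hit,
    Miss,
}

/// Look up a step result by cache key, returning the status alongside
/// the value so the turn log can record hit vs. miss explicitly.
fn lookup<'a>(cache: &'a HashMap<u64, String>, key: u64) -> (CacheStatus, Option<&'a String>) {
    match cache.get(&key) {
        Some(v) => (CacheStatus::Hit, Some(v)),
        None => (CacheStatus::Miss, None),
    }
}
```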
