Summary
Design and implement built-in runners that call LLM APIs directly (Anthropic, OpenAI, etc.) without requiring an external CLI tool.
Parent issue: #105 — Full Inventory, §21 Planned
Why
Currently the only production runner is `ClaudeCliRunner`, which shells out to the Claude CLI. Native API support enables:
- Lower latency — no CLI process overhead
- Richer control — direct access to sampling parameters, logprobs, streaming
- Multi-provider — prerequisite for model cascading and ensemble patterns (see multi-provider routing)
- Confidence signals — logprobs and token probabilities only available via API
Design Decisions Needed
- Provider string format: `provider/model` (e.g. `anthropic/claude-sonnet-4-20250514`)
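One decision point is how to parse the `provider/model` string (e.g. `anthropic/claude-sonnet-4-20250514`) into a runner selection. A minimal sketch, assuming the first `/` is the separator (the function name is illustrative, not from the codebase):

```python
# Sketch of parsing the provider/model spec string. Some ecosystems
# embed "/" inside model IDs, so split only on the first separator.

def parse_provider_string(spec: str) -> tuple[str, str]:
    """Split "provider/model" into (provider, model)."""
    provider, sep, model = spec.partition("/")
    if not sep or not provider or not model:
        raise ValueError(f"expected 'provider/model', got {spec!r}")
    return provider, model
```

Splitting on the first separator keeps the door open for model IDs that themselves contain slashes.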
Spec Reference
- Referenced in `spec/core/s21*.md` as a planned feature
- Provider string format in `spec/core/s15*.md`
Acceptance Criteria