## Summary

When building an agent on top of an externally managed conversation store (e.g. AWS Bedrock AgentCore Memory, a database-backed chat app, Redis), there is no supported way to seed prior turns into a fresh `ClaudeSDKClient` session as role-based messages. The only options today are disk-backed `.jsonl` session files or stuffing history into `system_prompt` as text, both with real drawbacks.
## Use case

We're running `ClaudeSDKClient` inside a Bedrock AgentCore runtime (a stateless per-invocation container). Conversation history is the source of truth in AgentCore Memory (a managed service). On each new user turn we want to:

1. Load prior turns from AgentCore Memory
2. Construct a new `ClaudeSDKClient` with that history seeded as proper user/assistant (and ideally `tool_use`/`tool_result`) turns
3. Call `client.query(new_user_message)` and stream the response

The SDK handles everything after step 2 beautifully. It's only the seeding that has no clean path.
## Current state

As of SDK 0.1.56:

- `ClaudeAgentOptions.resume: str | None` exists, but it reads a local `.jsonl` from `~/.claude/projects/<encoded-cwd>/<session-id>.jsonl`, which is hostile to cross-host and ephemeral-container deployments.
- `ClaudeAgentOptions.continue_conversation: bool` has the same disk-backed constraint.
- `ClaudeSDKClient.query(prompt: str | AsyncIterable[dict[str, Any]])` accepts a dict iterable, but the docs frame this as streaming the current interaction, not seeding prior turns. It's unclear whether feeding historical role-based dicts would populate session context or just be re-interpreted as new queries.
- `ClaudeAgentOptions.system_prompt` is typed as `str | SystemPromptPreset | SystemPromptFile | None`, with no list-of-content-blocks form, so we can't even add `cache_control` breakpoints around a stuffed history block.
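Of these, the `AsyncIterable` form of `query()` is the closest existing hook. Below is a sketch of what seeding through it might look like; the `{"type": ..., "message": ...}` envelope shape is our assumption based on the streaming-input examples, and whether the prior-turn dicts land as session context rather than being replayed as new queries is exactly the undocumented part:

```python
from typing import Any, AsyncIterator


async def seed_then_query(
    history: list[dict[str, Any]], new_user_message: str
) -> AsyncIterator[dict[str, Any]]:
    """Yield prior turns first, then the new turn, as one input stream.

    Intended to be passed to client.query(); whether the historical
    turns are treated as context or re-run as fresh queries is the
    open question this issue is asking about.
    """
    for turn in history:
        # Assumed envelope shape: {"type": <role>, "message": <API message>}.
        yield {"type": turn["role"], "message": turn}
    yield {
        "type": "user",
        "message": {"role": "user", "content": new_user_message},
    }
```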
## Workarounds we've considered

1. Stuff history into `system_prompt` as text (e.g. `**USER**: ... **ASSISTANT**: ...`). This is what Anthropic's own sessions docs recommend ("capture the results you need… and pass them into a fresh session's prompt"). Drawbacks:
   - The prompt cache invalidates every turn because the `system_prompt` hash changes.
   - The model loses role/turn semantics and can't see `tool_use`/`tool_result` pairs from prior turns.
   - Arbitrary truncation rules are needed to stay under the context limit, which loses fidelity.
2. Write AgentCore Memory → a local `.jsonl` → `resume=<id>`. The file format is undocumented, so minor SDK updates can silently break it. We'd also need to reconstruct `tool_use`/`tool_result` pairs correctly or the model gets confused on the first turn.
3. Abandon `ClaudeSDKClient` and call the `anthropic` SDK directly, losing the agent/sub-agent/hook/MCP machinery. That's a significant regression for apps that use those features.
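For reference, workaround 1 boils down to a flattener like the following (our own hypothetical helper, not an SDK API); the comment marks where the `tool_use`/`tool_result` structure is destroyed:

```python
from typing import Any


def render_history(turns: list[dict[str, Any]]) -> str:
    """Flatten role-based turns into the **USER**/**ASSISTANT** text form."""
    lines = []
    for turn in turns:
        content = turn["content"]
        if not isinstance(content, str):
            # Here is where fidelity dies: tool_use/tool_result blocks
            # collapse into flat text, so the model never sees real
            # tool-call structure from prior turns.
            content = " ".join(
                block.get("text", f"[{block.get('type', 'block')}]")
                for block in content
            )
        lines.append(f"**{turn['role'].upper()}**: {content}")
    return "\n".join(lines)
```

And because this string ends up concatenated into `system_prompt`, every new turn changes the prompt hash, which is what defeats prompt caching.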
## Proposed API (open to alternatives)

A few shapes that would solve this cleanly:

```python
# Option A: explicit `messages` on options
options = ClaudeAgentOptions(
    system_prompt="...",
    messages=[
        {"role": "user", "content": "..."},
        {"role": "assistant", "content": [...with tool_use blocks...]},
        {"role": "user", "content": [...with tool_result blocks...]},
    ],
    ...
)

# Option B: pluggable session store
options = ClaudeAgentOptions(
    session_store=MySessionStore(),  # pulls history on connect
    session_id="user-42",
    ...
)

# Option C: clarify query(AsyncIterable[dict]) as the history-seeding path
# and document the dict shape + guarantee that prior-turn dicts are
# treated as context rather than replayed as new queries.
```
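For Option B, the store interface could be as small as the following sketch (every name here is hypothetical; nothing like this exists in the SDK today):

```python
from typing import Any, Protocol


class SessionStore(Protocol):
    """Hypothetical interface the SDK would call around a session."""

    async def load(self, session_id: str) -> list[dict[str, Any]]:
        """Return prior turns as role-based message dicts, oldest first."""
        ...

    async def append(self, session_id: str, message: dict[str, Any]) -> None:
        """Persist a newly produced turn after each model/tool step."""
        ...


class InMemorySessionStore:
    """Toy implementation; a real one would call AgentCore Memory APIs."""

    def __init__(self) -> None:
        self._sessions: dict[str, list[dict[str, Any]]] = {}

    async def load(self, session_id: str) -> list[dict[str, Any]]:
        return list(self._sessions.get(session_id, []))

    async def append(self, session_id: str, message: dict[str, Any]) -> None:
        self._sessions.setdefault(session_id, []).append(message)
```

The SDK would call `load()` on connect to seed the session and `append()` as turns complete, keeping the external store the single source of truth.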
## Related issues

- The session introspection helpers (`get_session_messages`/`list_sessions`/`get_session_info`, added in 0.1.46) solved reading existing sessions, not seeding new ones.
- Seeding history via `connect(prompt=history_stream)`: no documented answer.
## Why this matters
The SDK is positioned as a general-purpose Agent SDK (quoting a commenter on #109). For anyone building multi-session chat on top of managed memory services (AgentCore, Vertex AI, LangGraph-style state stores) or custom databases, the current gap forces an unfortunate choice between losing prompt caching, losing tool semantics, or re-implementing the agentic loop outside the SDK. A first-class history-seed API would unlock a large class of production use cases.
Happy to help test a proposed API or contribute a PR if useful guidance emerges.