Background
Spun out of #262 (originally framed as an OAuth token caching problem; the actual issue there turned out to be an -http port collision in mcp-proxy's approval listener, unrelated to OAuth).
This issue tracks the separate, still-valid idea of OAuth token caching for remote MCP servers.
Why it's worth doing
When mcp-proxy wraps a remote OAuth-protected MCP server (e.g. Atlassian, Slack, Linear), the OAuth flow today is owned entirely by the spawned mcp-remote child process. That has two visible costs:
Cold-start browser round-trip on every fresh session — slow and disruptive UX.
mcp-remote runs its own callback listener on a port; concurrent sessions can collide there too (separate from #262's issue, which is mcp-proxy's own approval port).
Caching the token on disk after first auth, keyed by -name, would let subsequent sessions skip the browser dance entirely; on a cache miss or failed refresh, fall back to the full OAuth dance and write the cache on success.
The architectural question
Token caching only makes sense in mcp-proxy if mcp-proxy owns the OAuth flow. Today it doesn't — mcp-remote does. So this issue is blocked on / scoped against #227 (native HTTP/SSE transport for remote MCP servers).
Two paths forward:
(b) Investigate mcp-remote's existing cache. mcp-remote already caches under ~/.mcp-auth/. Worth checking whether the cache key correctly isolates per-server and survives concurrent sessions. If it works, this is a docs/known-good-pattern fix rather than new code.
Proposed scope
Token cache layout (when we own it):
~/.agent-receipts/oauth/<name>.token.json — keyed by -name
Permissions 0600, sits alongside the existing PEM key
JSON shape: {access_token, refresh_token, expires_at, issuer, scopes}
Acceptance (post #227)
Token cache file is written with permissions 0600.
Related
#262 (-http port collision)