fix(provider): complete DeepSeek reasoning_content round-trip for multi-turn conversations#24250

Open
knefenk wants to merge 3 commits into anomalyco:dev from knefenk:fix/deepseek-reasoning-complete

Conversation


@knefenk knefenk commented Apr 25, 2026

Issue for this PR

Closes #24104
Related: #24203

Type of change

  • Bug fix
  • New feature
  • Refactor / code improvement
  • Documentation

What does this PR do?

Fixes the three-layer bug where reasoning_content is dropped on conversation replay for DeepSeek thinking mode. PR #24218 addresses layer 1 only — this PR covers all three layers:

Layer 1 — interleaved not auto-enabled for reasoning models
When a model has reasoning: true but no explicit interleaved config, the interleaved transform is skipped entirely and reasoning_content is never set in providerOptions.
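
A minimal sketch of the layer-1 default, assuming a `resolveInterleaved` helper and config shape that are illustrative only (the PR describes the behavior, not this exact code):

```typescript
// Hypothetical sketch: explicit config wins, reasoning models fall back to
// { field: "reasoning_content" } instead of skipping the transform entirely.
// `resolveInterleaved` and `Interleaved` are assumed names, not provider.ts API.
type Interleaved = { field: string } | false

function resolveInterleaved(explicit: Interleaved | undefined, reasoning: boolean): Interleaved {
  return explicit ?? (reasoning ? { field: "reasoning_content" } : false)
}
```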

Layer 2 — Hardcoded openaiCompatible key breaks non-standard SDKs
The interleaved transform hardcodes providerOptions.openaiCompatible, but OpenRouter's SDK key is "openrouter". The remap logic at line ~336 does not cover this because it only remaps from model.providerID, not from openaiCompatible.

Layer 3 — Historical messages have no reasoning part
Messages stored in DB before reasoning mode was enabled have no reasoning part to extract. The interleaved transform only processes messages with existing reasoning parts, so these get skipped. DeepSeek rejects the request because reasoning_content is missing.

Changes

  1. provider.ts: Auto-enable interleaved: { field: "reasoning_content" } for models with reasoning: true
  2. transform.ts: Use dynamic SDK key (sdkKey(model.api.npm)) instead of hardcoded "openaiCompatible" — fixes OpenRouter and other non-standard providers
  3. transform.ts: New fallback that injects reasoning_content: "" for ALL assistant messages when capabilities.reasoning is true — covers historical messages with no reasoning part
  4. transform.ts: Expand DeepSeek detection to check model.id in addition to model.api.id — covers OpenRouter-routed DeepSeek models
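
As a rough sketch, change 2 amounts to deriving the providerOptions key from the SDK package name rather than hardcoding it. Only the OpenRouter case is stated above; the package-to-key mapping below is an assumption:

```typescript
// Hypothetical mapping from SDK npm package to providerOptions key.
// The OpenRouter case comes from the PR description; the rest is illustrative.
function sdkKey(npm: string): string {
  switch (npm) {
    case "@openrouter/ai-sdk-provider":
      return "openrouter"
    case "@ai-sdk/openai-compatible":
    default:
      return "openaiCompatible"
  }
}

// Usage inside the interleaved transform (sketch):
// providerOptions[sdkKey(model.api.npm)] = { reasoning_content: reasoningText }
```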

How did you verify your code works?

  • Reasoning models now auto-enable interleaved (no manual config needed)
  • OpenRouter provider uses correct "openrouter" key in providerOptions
  • All assistant messages get reasoning_content injected when reasoning is active
  • Non-reasoning models unaffected (gated on capabilities.reasoning)
  • Explicit interleaved config still takes priority (?? operator)

Screenshots / recordings

N/A

Checklist

  • I have tested my changes locally
  • I have not included unrelated changes in this PR

@github-actions
Contributor

The following comment was made by an LLM; it may be inaccurate:

Based on my search, I found the following related PRs:

Potential Related PRs

  1. PR fix(provider): auto-enable interleaved for reasoning models #24218 - fix(provider): auto-enable interleaved for reasoning models

    • This PR is explicitly mentioned in the current PR description as "Layer 1 only" — the current PR (24250) builds upon this fix to address all three layers of the bug.
  2. PR fix(provider): preserve Bedrock Claude reasoning replay #23927 - fix(provider): preserve Bedrock Claude reasoning replay

    • Related to reasoning content preservation across multi-turn conversations for a different provider.
  3. PR fix(provider): drop empty content messages after interleaved reasoning filter #17712 - fix(provider): drop empty content messages after interleaved reasoning filter

    • Related to handling empty reasoning content in interleaved transforms.

Note

The current PR #24250 is not a duplicate itself — it's a comprehensive fix that extends PR #24218 to cover the full three-layer bug where reasoning_content is dropped on conversation replay for DeepSeek thinking mode. The description clearly indicates this is an incremental improvement over the existing fix.

fix(provider): complete DeepSeek reasoning_content round-trip for multi-turn conversations

Fixes the three-layer bug where reasoning_content is dropped on conversation
replay for DeepSeek thinking mode and OpenRouter-routed DeepSeek models.

Three changes:

1. provider.ts: Auto-enable interleaved for reasoning models
   - When model.reasoning is true but interleaved is not explicitly set,
     default to { field: "reasoning_content" } instead of false
   - This triggers the interleaved transform that extracts reasoning
     and passes it via providerOptions

2. transform.ts: Use dynamic SDK key in interleaved transform
   - Replace hardcoded "openaiCompatible" with sdkKey(model.api.npm)
   - Fixes OpenRouter provider which expects "openrouter" key, not
     "openaiCompatible" (prevents key mismatch in providerOptions)

3. transform.ts: Inject reasoning_content for ALL assistant messages
   - New fallback transform fires when capabilities.reasoning is true
   - Sets reasoning_content: "" in providerOptions for every assistant
     message, including historical messages stored before reasoning mode
     was enabled (no reasoning part to extract from)
   - Also expands DeepSeek detection to check model.id in addition to
     model.api.id, covering OpenRouter-routed DeepSeek models

Closes anomalyco#24104
Related: anomalyco#24203 (OpenRouter users still affected by PR anomalyco#24218 alone)
Supersedes partial fix from PR anomalyco#24146 (merged but incomplete)
knefenk force-pushed the fix/deepseek-reasoning-complete branch from 9ee61bb to b6a7cdb (April 25, 2026 04:21)
…ges in reasoning fallback

The fallback transform only set reasoning_content in providerOptions for
array-content messages. String-content assistant messages (e.g., "It's 4.")
were converted to array form but didn't get providerOptions set.

Now both content types get reasoning_content: "" injected with the correct
SDK key, ensuring DeepSeek's API receives it on all assistant turns.
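
The commit above can be sketched roughly as follows. `Msg`, `Part`, and `injectReasoning` are illustrative names; the real transform.ts shapes may differ:

```typescript
type Part = { type: "text"; text: string }
type Msg = {
  role: string
  content: string | Part[]
  providerOptions?: Record<string, Record<string, unknown>>
}

// Normalize string content to array form and inject reasoning_content
// under the SDK key, for both content shapes.
function injectReasoning(msg: Msg, key: string): Msg {
  if (msg.role !== "assistant") return msg
  const content: Part[] =
    typeof msg.content === "string" ? [{ type: "text", text: msg.content }] : msg.content
  return {
    ...msg,
    content,
    providerOptions: {
      ...msg.providerOptions,
      // An existing value (e.g. preserved from the DB) takes priority over "".
      [key]: { reasoning_content: "", ...msg.providerOptions?.[key] },
    },
  }
}
```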
@knefenk
Author

knefenk commented Apr 25, 2026

Consideration: Scoping the fix to avoid unintended side effects on other reasoning models

First commit auto-enables interleaved: { field: "reasoning_content" } for all models with capabilities.reasoning: true, and the fallback injects reasoning_content: "" into providerOptions for all assistant messages when reasoning is active.

This is likely harmless for non-DeepSeek models because:

  1. Provider-specific SDKs only read fields they recognize — unknown fields are ignored
  2. reasoning_content is not a field used by the Anthropic, OpenAI, or Gemini SDKs
  3. An empty string value is a no-op even if read

However, if you want to be conservative and scope the fix to only providers that actually need this pattern, the fallback could be gated more narrowly:

// Instead of:
if (model.capabilities.reasoning) {

// Consider:
if (model.capabilities.reasoning && (
  model.api.id.includes("deepseek") || 
  model.id.includes("deepseek") ||
  model.api.npm === "@ai-sdk/openai-compatible" ||
  model.api.npm === "@openrouter/ai-sdk-provider"
)) {

This ensures the reasoning_content injection only fires for:

  • Direct DeepSeek API calls
  • OpenRouter-routed DeepSeek models
  • Any openai-compatible provider (since that's where the reasoning_content field pattern originates)

It would leave other reasoning models (Claude thinking, o-series, Gemini) unaffected and avoid unnecessary providerOptions writes.

Happy to push this as an additional commit if you prefer the narrower scope. Leaving it as-is is also fine if you're confident the extra fields are truly no-op for other providers.

@kipropbrian

Is this working for you? As you can see in #24190 (comment), this could be an OpenRouter issue too.

@knefenk
Author

knefenk commented Apr 25, 2026

@rekram1-node this PR addresses the reasoning_content round-trip bug discussed in #24093. The fix is client-side (transform.ts) so it applies regardless of the upstream provider (OpenRouter, opencode-go, direct API). Let me know if you need any changes.

When the interleaved transform runs on subsequent requests (after DB
round-trip), content parts no longer contain reasoning blocks (they were
extracted on the first pass). The unconditional [field]: reasoningText
overwrites the previously correct providerOptions.reasoning_content with
empty string, causing DeepSeek 400: 'The reasoning_content in the thinking
mode must be passed back to the API.'

Fix: set [field]: reasoningText first, then spread existing providerOptions
so that preserved values from DB take priority over empty reasoningText.

Closes anomalyco#24442 (co-discovered with @claudianus)
knefenk force-pushed the fix/deepseek-reasoning-complete branch from eba2109 to 41eb35a (April 26, 2026 08:47)
@knefenk
Author

knefenk commented Apr 26, 2026

Update: Fixed second-pass regression (commit 41eb35a)

Issue #24442 identified a regression where the interleaved transform overwrites the existing providerOptions value with an empty string on subsequent passes (after the DB round-trip). Reproduced and confirmed — it affects upstream dev (post-#24146), this PR, and the new #24443.

The fix (41eb35a): Swapped the order of spread and set so existing values take priority:


  • Pass 1: no existing providerOptions → reasoningText wins ✓
  • Pass 2: existing value from DB → survives empty reasoningText ✓
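
The reordering can be sketched as follows (`mergeReasoning` and `Opts` are illustrative names, not the actual transform.ts code):

```typescript
type Opts = Record<string, unknown>

// Sketch of the 41eb35a ordering fix: set the field first, then spread the
// existing providerOptions so a DB-preserved value beats empty reasoningText.
function mergeReasoning(existing: Opts | undefined, field: string, reasoningText: string): Opts {
  // Before the fix: { ...existing, [field]: reasoningText } — pass 2's empty
  // reasoningText clobbered the value restored from the DB.
  return { [field]: reasoningText, ...existing }
}
```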

This is equivalent to the approach in PR #24443 — both are correct; the maintainer can pick either.

Reproduction script and full trace at #24442 (comment)


Development

Successfully merging this pull request may close these issues.

DeepSeek thinking mode: reasoning_content must be passed back to API on conversation continuation

2 participants