
fix: swap input/output tokens for Google GenAI#223

Merged
samuelrince merged 3 commits into main from fix/swap-google-genai-tokens
Apr 2, 2026

Conversation

@samuelrince
Member

Summary

Fixes issue #222: input and output token counts were swapped in the Google GenAI tracer. The code incorrectly assigned candidates_token_count (generated/output tokens) to input_tokens and derived the input count by subtracting from the total.

Changes

Use the API fields directly instead of deriving via subtraction:

  • output_tokens = response.usage_metadata.candidates_token_count (generated tokens)
  • input_tokens = response.usage_metadata.prompt_token_count (prompt tokens)

This fixes all four code paths (sync, sync stream, async, async stream) and is more robust than deriving values by subtraction.
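The corrected mapping can be sketched as follows. This is a minimal illustration, not the tracer's actual code; `UsageMetadata` is a hypothetical stand-in for the `response.usage_metadata` object returned by the google-genai SDK:

```python
from dataclasses import dataclass


@dataclass
class UsageMetadata:
    # Stand-in for google-genai's response.usage_metadata fields.
    prompt_token_count: int       # tokens in the prompt (input)
    candidates_token_count: int   # tokens generated by the model (output)
    total_token_count: int


def extract_token_counts(usage: UsageMetadata) -> tuple[int, int]:
    """Return (input_tokens, output_tokens) using the API fields directly,
    with no subtraction-based derivation."""
    input_tokens = usage.prompt_token_count
    output_tokens = usage.candidates_token_count
    return input_tokens, output_tokens
```

Reading both fields directly avoids the original bug, where output tokens were assigned to `input_tokens` and the input count was reconstructed from the total.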

Testing

All 6 active Google GenAI tests pass. Verified with mock response that token values are now assigned correctly.

Use prompt_token_count and candidates_token_count directly instead of
deriving via subtraction, fixing the swapped input/output token counts
reported in issue #222.
Gemini thinking models report thoughts_token_count separately from
candidates_token_count. Since reasoning tokens are generated by the
model, they should be counted as output tokens for energy estimation.
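Under that rule, a sketch of how reasoning tokens could be folded into the output count (a hypothetical helper, assuming `thoughts_token_count` may be `None` or absent on non-thinking models):

```python
from types import SimpleNamespace


def count_output_tokens(usage) -> int:
    """Count generated tokens, including reasoning ("thinking") tokens.

    Gemini thinking models report thoughts_token_count separately from
    candidates_token_count; non-thinking models may report it as None or
    omit it, so fall back to 0 in that case.
    """
    thoughts = getattr(usage, "thoughts_token_count", None) or 0
    return usage.candidates_token_count + thoughts
```

For energy estimation, counting thinking tokens as output reflects the work the model actually performed to generate them.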
@samuelrince samuelrince marked this pull request as ready for review April 2, 2026 16:53
@samuelrince samuelrince merged commit f29c207 into main Apr 2, 2026
1 of 2 checks passed