feat: add turn_id to Event for grouping streaming chunks by LLM call#4816
ferponse wants to merge 1 commit into google:main
Conversation
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). The CLA check failed on this invocation; for the most up-to-date status, see the checks section at the bottom of the pull request.
Response from ADK Triaging Agent: Hello @ferponse, thank you for creating this PR! This PR is a new feature; could you please create a GitHub issue and associate it with this PR? This information will help reviewers review your PR more efficiently. Thanks!
Code Review
This pull request introduces a turn_id to group streaming event chunks, which is a great feature for consumers. The implementation for the run_async flow looks correct and is well-tested. However, I've found a critical issue in the implementation for the run_live (BIDI streaming) flow where the turn_id is incremented incorrectly. I've also noted that the new tests don't cover this live streaming path, which would be important to add to prevent regressions.
```python
while True:
  async with Aclosing(llm_connection.receive()) as agen:
    async for llm_response in agen:
      turn_id += 1
```
This line incorrectly increments turn_id for every streaming chunk, which defeats the purpose of grouping chunks by turn. The turn_id should remain constant for all chunks of a single conversational turn and only be incremented when a new turn begins.
In the live streaming flow, the llm_response.turn_complete flag can be used to detect the end of a turn. I recommend managing the turn_id state outside this loop and incrementing it only when llm_response.turn_complete is True.
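The suggested fix can be sketched as follows. This is an illustrative stand-in, not the actual ADK internals: `LlmResponse` here is a toy dataclass with only the `turn_complete` flag, and `receive_from_model` is a simplified stand-in for `_receive_from_model`.

```python
from dataclasses import dataclass
from typing import AsyncIterator, List, Tuple


@dataclass
class LlmResponse:
    """Toy stand-in for the ADK LlmResponse, keeping only what we need."""
    text: str
    turn_complete: bool = False


async def receive_from_model(
    responses: AsyncIterator[LlmResponse],
) -> List[Tuple[int, str]]:
    # turn_id state lives OUTSIDE the chunk loop, so all chunks of one
    # conversational turn share the same value.
    turn_id = 1
    tagged = []
    async for resp in responses:
        tagged.append((turn_id, resp.text))
        if resp.turn_complete:
            turn_id += 1  # only increment at a turn boundary
    return tagged
```

With this shape, two chunks followed by a `turn_complete` signal all carry `turn_id=1`, and the next chunk starts `turn_id=2`.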
```
@@ -0,0 +1,168 @@
# Copyright 2026 Google LLC
```
These tests provide good coverage for the run_async flow. However, the turn_id logic is also implemented for the run_live (BIDI streaming) flow in _receive_from_model, but that path is not covered by these tests. Given the different logic for the live flow, adding tests specifically for it would be highly beneficial to ensure turn_id behaves as expected and to catch potential issues like the one identified in the live implementation.
When using StreamingMode.SSE, all partial chunks from the same LLM call now share a stable turn_id (1-based integer counter). This allows consumers to trivially group streaming chunks by turn without fragile heuristics based on event type transitions. The invocation_id groups all events in a single agent invocation, while id changes on every yield. The new turn_id sits in between: it stays constant across all events produced by one LLM call and increments when a new call starts (e.g. after tool execution).
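The consumer-side grouping described above can be illustrated with a few lines of Python. The `SimpleNamespace` objects are stand-ins for ADK `Event` objects; only the `turn_id` and `text` attributes are assumed here.

```python
from itertools import groupby
from types import SimpleNamespace

# Toy event stream: turn 1 streams two partial chunks plus a final
# response; turn 2 (after tool execution) yields one final response.
events = [
    SimpleNamespace(turn_id=1, partial=True, text="Hel"),
    SimpleNamespace(turn_id=1, partial=True, text="lo"),
    SimpleNamespace(turn_id=1, partial=False, text="Hello"),
    SimpleNamespace(turn_id=2, partial=False, text="Done"),
]

# Because turn_id is monotonically increasing, consecutive grouping
# with itertools.groupby is enough — no heuristics on event types.
turns = {
    tid: [e.text for e in grp]
    for tid, grp in groupby(events, key=lambda e: e.turn_id)
}
```

Note that `groupby` only groups consecutive items, which is exactly the property a stable, incrementing `turn_id` guarantees within one invocation.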
Force-pushed from dc971b9 to 4444916.
Closing in favour of a new PR with scope limited to the SSE flow only (removed BIDI changes).
Summary
- Adds a `turn_id: int` field to `Event` that groups all streaming chunks belonging to the same LLM call
- `turn_id` is a 1-based counter that increments with each LLM call inside `run_async`, making it easy to identify turn boundaries (turn 1, turn 2, …)

Problem
When using `runner.run_async()` with `StreamingMode.SSE`, there is no way to distinguish which partial streaming chunks belong to which LLM response turn. The `invocation_id` is shared across all events in the invocation, and `id` changes on every yield. The only workaround is observing transition patterns (partial → function_call → function_response), which is fragile.

Solution
A new `turn_id: Optional[int]` field on `Event`:

- Incremented for each LLM call in the `run_async` `while True` loop
- `None` by default, so existing code is unaffected

Changes
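The counter behaviour can be sketched with a simplified driver loop. The names below (`run_async_sketch`, the list-of-lists input) are illustrative only; the real implementation threads the counter through `_run_one_step_async` in `base_llm_flow.py`.

```python
from typing import AsyncIterator, List, Tuple


async def run_async_sketch(
    llm_calls: List[List[str]],
) -> AsyncIterator[Tuple[int, str]]:
    """Yield (turn_id, chunk) pairs; each inner list is one LLM call."""
    turn_id = 0
    for chunks in llm_calls:       # one iteration ≈ one _run_one_step_async
        turn_id += 1               # a new LLM call starts a new turn
        for chunk in chunks:
            yield (turn_id, chunk)  # all chunks of this call share turn_id
```

For a text → tool → text conversation, the first call's chunks carry `turn_id=1` and the post-tool call's chunks carry `turn_id=2`.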
- `src/google/adk/events/event.py` — new `turn_id: Optional[int]` field with docstring
- `src/google/adk/flows/llm_flows/base_llm_flow.py` — counter in the `run_async` loop, passed to `_run_one_step_async`; counter in BIDI flow
- `tests/unittests/flows/llm_flows/test_turn_id.py` — new tests

Test plan
- `test_partial_chunks_share_same_turn_id` — partial chunks from one LLM call share `turn_id=1`
- `test_turn_id_present_on_final_response` — a single final response carries `turn_id=1`
- `test_different_llm_calls_get_different_turn_ids` — events from separate LLM calls (text → tool → text) get `turn_id` 1 and 2
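The third test case above can be re-created against a toy event stream rather than the real ADK runner; `tag_events` is a hypothetical helper, not part of the PR.

```python
from typing import List, Tuple


def tag_events(calls: List[List[str]]) -> List[Tuple[int, str]]:
    """Tag each chunk of each LLM call with a 1-based turn_id."""
    tagged = []
    for turn_id, chunks in enumerate(calls, start=1):
        tagged.extend((turn_id, c) for c in chunks)
    return tagged


# text response with a tool call → tool runs → final text = two LLM calls
events = tag_events([["partial", "function_call"], ["final text"]])
assert [t for t, _ in events] == [1, 1, 2]
```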