fix(a2a-demo): adapt joint graph to ADK 1.33 RemoteA2aAgent sub-session telemetry#147
Merged
haiyuan-eng-google merged 2 commits on May 12, 2026
ADK 1.33 changed RemoteA2aAgent to spawn its own caller-side
InvocationContext with a fresh session_id, so the A2A_INTERACTION
row no longer lands under the supervisor session. The demo's G1
gate and the auditor projection both joined on
``ev.session_id = caller_session_id`` and collapsed to zero rows.
Caller side (``run_caller_agent.py``):
* Materialize ``supervisor_a2a_invocations`` after the caller flush,
  pairing each supervisor TOOL_STARTING row for
  ``audience_risk_reviewer`` with the chronologically-next
  A2A_INTERACTION on the audience_risk_reviewer sub-session, via
  ROW_NUMBER() ranking inside the current run's (user_id, session
  set, min-timestamp) window. This is deterministic because campaigns
  run sequentially.
* G1: count A2A_INTERACTION rows at dataset level filtered by
``agent='audience_risk_reviewer'`` and require ≥ N campaigns.
* G1.5 (new): require mapping count == campaign count and a
  non-NULL a2a_context_id on every row.
* G3: read a2a_context_id from the mapping table rather than from
the supervisor session's agent_events.
Auditor side (``build_joint_graph.py``):
* Rewrite ``remote_agent_invocations`` to read from
``<CALLER_DATASET>.supervisor_a2a_invocations`` and join through
``caller_campaign_runs`` to keep current-run scoping. The
``caller_session_id`` column now points back at the supervisor
session, so the property graph's
``CallerCampaignRun -[:DelegatedVia]-> RemoteAgentInvocation``
edge remains valid without touching the joint graph DDL.
* Receiver-side stitch (a2a_context_id == receiver.session_id) is
unchanged.
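The unchanged receiver-side stitch reduces to an equality join. A minimal sketch, with illustrative dict shapes rather than the actual BigQuery schemas:

```python
def stitch_receiver_edges(mapping_rows: list[dict],
                          receiver_sessions: list[dict]) -> list[tuple[str, str]]:
    """Receiver-side stitch, unchanged by this PR: a mapping row links to
    a receiver session when its a2a_context_id equals the receiver's
    session_id. Rows with a NULL context id produce no edge."""
    receiver_ids = {s["session_id"] for s in receiver_sessions}
    edges = []
    for row in mapping_rows:
        ctx = row.get("a2a_context_id")
        if ctx is not None and ctx in receiver_ids:
            edges.append((row["caller_session_id"], ctx))
    return edges
```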
Docs (``A2A_JOINT_LINEAGE.md``, ``README.md``):
* New "ADK 1.33 sub-session shape" section documenting the mapping
schema and chronological-rank pairing.
* Auditor projection table updated to show the mapping source.
* Failure-modes table gains a G1.5 row.
Verified live on test-project-0728-467323 with google-adk 1.33.0:
G1/G1.5/G2/G3 all green; ``stitch_coverage`` reports
a2a_calls=3, stitched_edges=3 (was 0 before).
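As a minimal sketch, the G1/G1.5 gates above reduce to two boolean predicates over the materialized rows (the dict shapes are illustrative; the demo evaluates these as SQL):

```python
def gate_g1(a2a_rows: list[dict], n_campaigns: int) -> bool:
    """G1: enough A2A_INTERACTION rows for the remote reviewer agent,
    counted at dataset level."""
    hits = [r for r in a2a_rows if r.get("agent") == "audience_risk_reviewer"]
    return len(hits) >= n_campaigns

def gate_g1_5(mapping_rows: list[dict], n_campaigns: int) -> bool:
    """G1.5: exactly one mapping row per campaign, each carrying a
    non-NULL a2a_context_id."""
    return (len(mapping_rows) == n_campaigns
            and all(r.get("a2a_context_id") is not None for r in mapping_rows))
```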
haiyuan-eng-google approved these changes on May 12, 2026
caohy1988 added a commit to caohy1988/BQAA-SDK-fork that referenced this pull request on May 12, 2026:
docs(a2a-demo): refresh narration + lineage for ADK 1.33 + AI.GENERATE rewrite

PR GoogleCloudPlatform#147 landed the supervisor↔A2A-sub-session mapping table and the receiver-extraction fallback parser, but the two narrative docs still described the pre-GoogleCloudPlatform#147 state. This commit aligns them.

DATA_LINEAGE.md:

* Layer 1 (agent_events): add the ADK 1.33 caller-side telemetry note. The A2A_INTERACTION row lives in a sibling caller session, not the supervisor's.
* Layer 2 (Demo metadata): add the supervisor_a2a_invocations mapping table row with full schema, the per-tool-call time-window pairing rule, and the G1.5 gate it backstops.
* Layer 3 (SDK extraction outputs): add the receiver-side dual-path writer section. The primary AI.GENERATE path now uses SQL-style output_schema with deterministic model_params, and the demo fallback parser in build_org_graphs._repair_receiver_extraction_from_prompt_contract re-stores via store_decision_points (WRITE_TRUNCATE; cited as a sharp edge).
* Layer 4 (Auditor projections): fix the remote_agent_invocations source projection SQL; it no longer reads from <CALLER>.agent_events. The SQL in the table now matches build_joint_graph.py byte-for-byte: it reads from <CALLER>.supervisor_a2a_invocations and joins through caller_campaign_runs.

DEMO_NARRATION.md:

* Beat 1: keep the 5-min flow; add a single optional follow-up paragraph for technical Q&A that names the ADK 1.33 split-session shape and the bridge table.
* Beat 2: add a single follow-up paragraph for the AI.GENERATE typed output_schema + fallback parser story.
* New "Presenter aside — robustness" section between Close and Questions: names both design choices (ADK 1.33 bridge + receiver fallback) as deliberate resilience to runtime/model shifts.
* Questions To Invite: add a question about ADK/model shape changes and refresh the closing answer.

The 5-minute talk track itself is unchanged. The additions are all presenter-aside / Q&A material, gated by the audience asking.
haiyuan-eng-google pushed a commit that referenced this pull request on May 12, 2026:
docs(a2a-demo): refresh narration + lineage for ADK 1.33 + AI.GENERATE rewrite (#149)

Squash of two commits:

* docs(a2a-demo): refresh narration + lineage for ADK 1.33 + AI.GENERATE rewrite (same message as the fork commit above)
* docs(a2a-demo): correct receiver fallback write contract
Live-verified on test-project-0728-467323 with google-adk==1.33.0.

This PR fixes the ADK 1.33 RemoteA2aAgent telemetry shape and removes the receiver-extraction flake that was blocking the A2A demo from running end-to-end.
What changed
ADK 1.33 caller-side sub-session mapping
ADK 1.33 writes `RemoteA2aAgent` telemetry under a fresh caller-side sub-session (`agent='audience_risk_reviewer'`), not under the parent supervisor session. The demo now materializes `caller.supervisor_a2a_invocations` to bridge that split:

* `TOOL_STARTING` rows for `audience_risk_reviewer` come from the supervisor session.
* `A2A_INTERACTION` rows come from the RemoteA2aAgent sub-sessions.
* Rows are paired per tool call by time window (`supervisor_ts <= a2a_ts < next_supervisor_ts`) for the same `user_id`.

Gates now match that shape:

* G1 counts `A2A_INTERACTION` rows for `agent='audience_risk_reviewer'`.
* G1.5 requires one mapping row per campaign, each with a non-NULL `a2a_context_id`.
* G3 reads `a2a_context_id` from the mapping table.
* `build_joint_graph.remote_agent_invocations` reads from the mapping. The graph DDL is unchanged: `caller_session_id` still points at the supervisor session, while `a2a_context_id` still stitches to the receiver session.

Receiver extraction reliability
The SDK decision extraction query now follows current BigQuery `AI.GENERATE` docs: it uses SQL-style `output_schema` for the nested decision/candidate contract and converts `AI.GENERATE(...).decisions` back to JSON with `TO_JSON_STRING`, preserving the existing Python parser contract. `build_org_graphs.py` also has a demo-specific safety net: if AI.GENERATE returns too few receiver decisions/candidates, it deterministically parses the receiver agent's strict prompt-shaped `LLM_RESPONSE` format, rewrites `decision_points`/`candidates`, then recreates decision edges and reruns the gate.

Live verification

* Initial receiver extraction returned `decision_points=1`, `candidates=3`; the fallback repaired it to `decision_points=4`, `candidates=12`, then the acceptance gate passed.
* `build_joint_graph.py` after repair: `a2a_calls=3`, `stitched_edges=3`.
* `decision_points=3`, `candidates=9`.
* `run_analyst_agent.py` completed all four canned questions and produced user-visible answers for graph health, campaign list, one campaign audit path, and portfolio dropped-option review.

Static verification

* `pytest tests/test_context_graph.py -q` → 98 passed
* `pyink --check` on changed Python → clean
* `isort --check-only` on changed Python → clean
* `py_compile` on changed Python → clean
* `bash -n` on demo shell scripts → clean
* `git diff --check` → clean

Reference
BigQuery `AI.GENERATE` supports structured output using SQL-style `output_schema` and returns a `STRUCT`; when an output schema is specified, `result` is replaced by the custom schema fields. That is why the new query reads `.decisions` and wraps it with `TO_JSON_STRING` for the existing parser.

Docs: https://docs.cloud.google.com/bigquery/docs/reference/standard-sql/bigqueryml-syntax-ai-generate
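The fallback described under "Receiver extraction reliability" lives in `build_org_graphs._repair_receiver_extraction_from_prompt_contract`. The strict prompt contract itself is not reproduced in this PR, so the line format below (`DECISION:` / `CANDIDATE:` prefixes) is purely hypothetical; this is only a sketch of the repair trigger and a deterministic re-parse:

```python
import re

# Hypothetical line-oriented contract; the demo's actual LLM_RESPONSE
# format is defined by the receiver agent's prompt, not shown here.
_DECISION_RE = re.compile(r"^DECISION:\s*(.+)$")
_CANDIDATE_RE = re.compile(r"^CANDIDATE:\s*(.+)$")

def parse_strict_response(text: str) -> list[dict]:
    """Deterministically re-derive decision points and their candidates
    from a strict prompt-shaped response, in the spirit of the demo's
    fallback parser."""
    decisions: list[dict] = []
    for raw in text.splitlines():
        line = raw.strip()
        if m := _DECISION_RE.match(line):
            decisions.append({"decision": m.group(1), "candidates": []})
        elif (m := _CANDIDATE_RE.match(line)) and decisions:
            decisions[-1]["candidates"].append(m.group(1))
    return decisions

def needs_repair(decisions: list[dict], expected_decisions: int) -> bool:
    """Trigger the fallback when AI.GENERATE returned too few rows."""
    return len(decisions) < expected_decisions
```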