Fix: show processing indicator during pre-token latency (turnInFlight) #190

sakshyasinha wants to merge 3 commits into
Conversation
Add a `turnInFlight` state to StreamManager, use the combined processing state in StopButton, and set/clear `turnInFlight` around message send and background streaming so the UI shows a processing indicator during pre-token latency.
📝 Walkthrough

A combined processing state (`turnInFlight || isStreaming`) was added. StreamStateManager gained turn-in-flight tracking and `getIsProcessing()`; subscribers now receive the combined boolean. StreamingWorkflow and the send-message paths set/clear the turn-in-flight flag and await background completion. StopButton now uses the combined processing state.

Changes: processing state and turn-in-flight tracking.
Sequence diagram:

```mermaid
sequenceDiagram
    participant User
    participant UI as StopButton
    participant Send as customSendMessage / customStreamMessage
    participant Manager as StreamStateManager
    participant Workflow as StreamingWorkflow
    participant BG as Background/Runtime
    User->>Send: send message
    Send->>Manager: setTurnInFlight(true)
    Manager->>UI: notify(isProcessing=true)
    UI->>UI: render stop button
    Send->>Workflow: streamViaBackground() / fetchStreamingData()
    Workflow->>Manager: setTurnInFlight(true)
    Workflow->>BG: dispatch background message
    BG->>BG: process agent request
    BG->>Workflow: agent_complete / agent_error
    Workflow->>Workflow: resolve completionPromise
    Workflow->>Manager: setTurnInFlight(false)
    Manager->>UI: notify(isProcessing=false)
    UI->>UI: hide stop button
    Send->>Manager: setTurnInFlight(false) (finally)
```
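The combined processing state described in the walkthrough can be sketched roughly as below. This is an illustrative reconstruction using the PR's names (`StreamStateManager`, `setTurnInFlight`, `getIsProcessing`), not the actual implementation:

```typescript
// Sketch of the combined "processing" state: the UI treats the turn as
// in progress from submit (turnInFlight) through streaming (isStreaming).
type Listener = (isProcessing: boolean) => void;

class StreamStateManager {
  private isStreaming = false;
  private turnInFlight = false;
  private listeners: Listener[] = [];

  subscribe(fn: Listener): void {
    this.listeners.push(fn);
  }

  // Combined state: true from message submit until the turn fully completes.
  getIsProcessing(): boolean {
    return this.turnInFlight || this.isStreaming;
  }

  setStreaming(value: boolean): void {
    this.isStreaming = value;
    this.notify();
  }

  setTurnInFlight(value: boolean): void {
    this.turnInFlight = value;
    this.notify();
  }

  // Subscribers (e.g., a StopButton) receive only the combined boolean.
  private notify(): void {
    const processing = this.getIsProcessing();
    for (const fn of this.listeners) fn(processing);
  }
}
```

The point of the combined boolean is that the stop button appears as soon as `setTurnInFlight(true)` fires, before any token has streamed.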
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks
❌ Failed checks: 1 (warning)
✅ Passed checks: 4
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/frontend_workspaces/agentic_chat/src/StreamManager.tsx (1)
Lines 59-75: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Clear local processing state before awaiting the `/stop` request.

On lines 60-75, `stopStream()` waits for `apiFetch('/stop')` before clearing local flags. If that request is slow or fails, the UI can remain stuck in "processing" even after the user pressed Stop.

Proposed fix:
```diff
 async stopStream() {
   if (this.currentAbortController) {
     this.currentAbortController.abort();
   }
+  // Clear local UI state immediately.
+  this.setStreaming(false);
+  this.setTurnInFlight(false);
+  this.setAbortController(null);
+
   try {
     const response = await apiFetch('/stop', {
       method: "POST",
       headers: {
         "Content-Type": "application/json",
       },
     });
     if (!response.ok) {
       console.error("Failed to stop stream on server");
     }
   } catch (error) {
     console.error("Error stopping stream:", error);
   }
-
-  this.setStreaming(false);
-  this.setTurnInFlight(false);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/frontend_workspaces/agentic_chat/src/StreamManager.tsx` around lines 59 - 75, In stopStream(), clear the local UI processing state before waiting for the server call by calling this.setStreaming(false) and this.setTurnInFlight(false) prior to invoking or awaiting apiFetch('/stop'); keep the try/catch around the apiFetch('/stop') call (so errors are still logged) but ensure the UI flags are cleared first (or fire-and-forget the apiFetch) so the UI doesn't remain stuck if the request is slow or fails. Use the existing methods this.setStreaming, this.setTurnInFlight, and apiFetch('/stop') to implement this change.
🧹 Nitpick comments (1)
src/frontend_workspaces/agentic_chat/src/customSendMessage.ts (1)
Lines 139-148: ⚡ Quick win

Use a single owner for `turnInFlight` on the background path.

`customStreamMessage` toggles `turnInFlight`, but `streamViaBackground` now does the same. Keeping both creates redundant state transitions and makes lifecycle ownership harder to reason about.

Proposed simplification:
```diff
-  try {
-    streamStateManager.setTurnInFlight(true);
-    switch (request.input.text) {
-      default:
-        await streamViaBackground(instance, request.input.text || "");
-        break;
-    }
-  } finally {
-    streamStateManager.setTurnInFlight(false);
-  }
+  switch (request.input.text) {
+    default:
+      await streamViaBackground(instance, request.input.text || "");
+      break;
+  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/frontend_workspaces/agentic_chat/src/customSendMessage.ts` around lines 139 - 148, The code currently sets streamStateManager.setTurnInFlight(true/false) around every send but streamViaBackground already manages turnInFlight, causing duplicate toggles; adjust ownership so only one place controls it. Remove the outer try/finally calls to streamStateManager.setTurnInFlight in customSendMessage.ts around the switch and either (a) let streamViaBackground fully manage turnInFlight, or (b) only set turnInFlight in customStreamMessage for non-background cases; ensure streamViaBackground continues to call streamStateManager.setTurnInFlight(true) at start and setTurnInFlight(false) when finished so there’s a single owner of the turnInFlight lifecycle.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/frontend_workspaces/agentic_chat/src/StreamingWorkflow.ts`:
- Around line 588-592: The completionPromise (with completionResolver) can
remain unresolved when sendMessage returns { type: "error" } or rejects, leaving
the listener attached and processing stuck; update the sendMessage call paths in
StreamingWorkflow (the branches handling sendMessage's success/error and its
.catch) to always call completionResolver() and remove the message listener
before returning or throwing, and for safety wrap the background work in a
try/finally (or ensure every early-return/error branch) so completionResolver is
invoked and the listener detached in all outcomes (references:
completionPromise, completionResolver, sendMessage, and the message listener
registration).
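The prompt above describes the "always resolve, always tear down" fix only in prose. A hedged sketch of the pattern follows; `runTurn`, `dispatchToBackground`, and `addListener` are hypothetical stand-ins for the real `sendMessage` / runtime-listener plumbing in StreamingWorkflow, not its actual API:

```typescript
// Illustrative pattern: resolve the completion promise and remove the
// listener in every outcome (success, error reply, or rejection), so the
// UI never stays stuck in "processing" and the listener never leaks.
type Cleanup = () => void;

async function runTurn(
  dispatchToBackground: () => Promise<{ type: string }>,
  addListener: (onDone: () => void) => Cleanup,
): Promise<void> {
  let completionResolver!: () => void;
  const completionPromise = new Promise<void>((res) => (completionResolver = res));

  // The listener resolves the promise when the background signals completion.
  const removeListener = addListener(() => completionResolver());

  try {
    const result = await dispatchToBackground();
    if (result.type === "error") {
      // Error reply: resolve immediately instead of waiting forever.
      completionResolver();
    }
    await completionPromise;
  } catch (e) {
    // Dispatch rejected: resolve so nothing stays pending, then rethrow.
    completionResolver();
    throw e;
  } finally {
    // Runs in every outcome, so the listener never leaks into the next turn.
    removeListener();
  }
}
```

Wrapping the teardown in `finally` is what guarantees the listener is detached even on the early-return and error branches the review calls out.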
---
Outside diff comments:
In `@src/frontend_workspaces/agentic_chat/src/StreamManager.tsx`:
- Around line 59-75: In stopStream(), clear the local UI processing state before
waiting for the server call by calling this.setStreaming(false) and
this.setTurnInFlight(false) prior to invoking or awaiting apiFetch('/stop');
keep the try/catch around the apiFetch('/stop') call (so errors are still
logged) but ensure the UI flags are cleared first (or fire-and-forget the
apiFetch) so the UI doesn't remain stuck if the request is slow or fails. Use
the existing methods this.setStreaming, this.setTurnInFlight, and
apiFetch('/stop') to implement this change.
---
Nitpick comments:
In `@src/frontend_workspaces/agentic_chat/src/customSendMessage.ts`:
- Around line 139-148: The code currently sets
streamStateManager.setTurnInFlight(true/false) around every send but
streamViaBackground already manages turnInFlight, causing duplicate toggles;
adjust ownership so only one place controls it. Remove the outer try/finally
calls to streamStateManager.setTurnInFlight in customSendMessage.ts around the
switch and either (a) let streamViaBackground fully manage turnInFlight, or (b)
only set turnInFlight in customStreamMessage for non-background cases; ensure
streamViaBackground continues to call streamStateManager.setTurnInFlight(true)
at start and setTurnInFlight(false) when finished so there’s a single owner of
the turnInFlight lifecycle.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: e3542f72-031f-4102-abfd-bc26fa52d8f9
📒 Files selected for processing (4)
- src/frontend_workspaces/agentic_chat/src/StreamManager.tsx
- src/frontend_workspaces/agentic_chat/src/StreamingWorkflow.ts
- src/frontend_workspaces/agentic_chat/src/customSendMessage.ts
- src/frontend_workspaces/agentic_chat/src/floating/stop_button.tsx
…r/reject and remove listener
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/frontend_workspaces/agentic_chat/src/StreamingWorkflow.ts (1)
Lines 700-719: ⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

Remove the extra `}` in the `.catch()` block.

Line 715 closes the callback one time too many, so this file no longer parses and the `await completionPromise` / `finally` block below it becomes invalid.

Suggested fix:
```diff
       } catch (e) {
         // noop
       }
       (window as any).chrome.runtime.onMessage.removeListener(listener);
-    }
     });

   // Wait until the background signals completion via the listener
   await completionPromise;
```

```shell
#!/bin/bash
set -e
sed -n '700,719p' src/frontend_workspaces/agentic_chat/src/StreamingWorkflow.ts | nl -ba
if command -v biome >/dev/null 2>&1; then
  biome check src/frontend_workspaces/agentic_chat/src/StreamingWorkflow.ts
fi
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/frontend_workspaces/agentic_chat/src/StreamingWorkflow.ts` around lines 700 - 719, The .catch callback for the dispatch promise has an extra closing brace that prematurely ends the function and breaks the subsequent await completionPromise / finally block; open the .catch handler around the error handling that references completionResolver, window.aiSystemInterface and listener (the catch that logs "Failed to dispatch agent_query" and calls window.aiSystemInterface.addStep / setProcessingComplete, completionResolver?.(), and removes the runtime message listener) and remove the stray closing '}' so the .catch block properly closes only once and the surrounding async flow (await completionPromise and the finally block) parses correctly.
♻️ Duplicate comments (1)
src/frontend_workspaces/agentic_chat/src/StreamingWorkflow.ts (1)
Lines 669-721: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Keep a final listener teardown in the outer `finally`.

Line 669 registers a global runtime listener, but lines 719-721 do not unregister it. If anything throws after registration and before one of the async completion branches runs, that listener leaks into the next turn.
Suggested fix
```diff
     await completionPromise;
   } finally {
+    (window as any).chrome.runtime.onMessage.removeListener(listener);
     // Ensure we clear the in-flight flag when this function completes.
     streamStateManager.setTurnInFlight(false);
   }
```

```shell
#!/bin/bash
python - <<'PY'
from pathlib import Path
path = Path("src/frontend_workspaces/agentic_chat/src/StreamingWorkflow.ts")
lines = path.read_text().splitlines()
for n in [669, 642, 659, 697, 714, 719, 720, 721]:
    print(f"{n}: {lines[n-1]}")
PY
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/frontend_workspaces/agentic_chat/src/StreamingWorkflow.ts` around lines 669 - 721, The runtime message listener registered as (window as any).chrome.runtime.onMessage.addListener(listener) can leak if an exception occurs before the branch that removes it; update the outer finally block that runs after awaiting completionPromise (the finally that currently calls streamStateManager.setTurnInFlight(false)) to also call (window as any).chrome.runtime.onMessage.removeListener(listener) so the listener is always torn down; ensure you reference the same listener variable used in sendMessage's .then/.catch branches and keep the removal idempotent/safe if the listener was already removed.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@src/frontend_workspaces/agentic_chat/src/StreamingWorkflow.ts`:
- Around line 700-719: The .catch callback for the dispatch promise has an extra
closing brace that prematurely ends the function and breaks the subsequent await
completionPromise / finally block; open the .catch handler around the error
handling that references completionResolver, window.aiSystemInterface and
listener (the catch that logs "Failed to dispatch agent_query" and calls
window.aiSystemInterface.addStep / setProcessingComplete,
completionResolver?.(), and removes the runtime message listener) and remove the
stray closing '}' so the .catch block properly closes only once and the
surrounding async flow (await completionPromise and the finally block) parses
correctly.
---
Duplicate comments:
In `@src/frontend_workspaces/agentic_chat/src/StreamingWorkflow.ts`:
- Around line 669-721: The runtime message listener registered as (window as
any).chrome.runtime.onMessage.addListener(listener) can leak if an exception
occurs before the branch that removes it; update the outer finally block that
runs after awaiting completionPromise (the finally that currently calls
streamStateManager.setTurnInFlight(false)) to also call (window as
any).chrome.runtime.onMessage.removeListener(listener) so the listener is always
torn down; ensure you reference the same listener variable used in sendMessage's
.then/.catch branches and keep the removal idempotent/safe if the listener was
already removed.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 8e99b96c-be7b-4ee2-b02c-46aa51cc65cb
📒 Files selected for processing (1)
src/frontend_workspaces/agentic_chat/src/StreamingWorkflow.ts
Can you record a demo showcasing this?
Actionable comments posted: 1
🧹 Nitpick comments (1)
scripts/playwright/record_demo.spec.js (1)
Lines 18-21: ⚡ Quick win

Replace fixed timing waits with state-based waits for a stable demo.

`networkidle` + fixed sleeps (1s, 12s) can make recordings flaky across machines. Waiting for concrete UI states (input ready, processing indicator shown, first response token shown) will make the demo deterministic and aligned with the PR goal.

Also applies to: lines 35-36
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@scripts/playwright/record_demo.spec.js` around lines 18 - 21, Replace the fixed sleeps after page.goto and later waits (uses of page.waitForTimeout) with state-based waits: use page.waitForSelector or page.waitForFunction to wait for the input to be ready (e.g., await page.waitForSelector('input[your-input-selector]:not([disabled])')), wait for the processing indicator to appear/disappear (e.g., await page.waitForSelector('.processing') / await page.waitForSelector('.processing', { state: 'hidden' })), and wait for the first response token or message to render (e.g., await page.waitForSelector('.response .token:first-child')). Replace the 1s/12s waits around page.goto and later in the script with these selector/function-based waits so the demo is deterministic and stable across machines.
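As a generic illustration of the state-based waiting the comment recommends, here is a plain polling helper. This is not Playwright API (in the spec itself you would reach for the built-in `page.waitForSelector` / `page.waitForFunction`); it only shows the principle of polling a predicate instead of sleeping a fixed interval:

```typescript
// Minimal state-based wait: poll a predicate until it holds or a deadline
// passes, instead of sleeping a fixed 1s/12s and hoping the UI caught up.
async function waitForState(
  predicate: () => boolean,
  { timeoutMs = 5000, intervalMs = 50 }: { timeoutMs?: number; intervalMs?: number } = {},
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (!predicate()) {
    if (Date.now() > deadline) {
      throw new Error(`condition not met within ${timeoutMs}ms`);
    }
    // Sleep briefly between polls.
    await new Promise((res) => setTimeout(res, intervalMs));
  }
}
```

In the demo script, the equivalent change would replace `await page.waitForTimeout(12000)` with waits on the processing indicator and the first response token (the exact selectors are app-specific assumptions).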
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@scripts/playwright/record_demo.spec.js`:
- Around line 11-43: Wrap the main flow (creating context/page, navigation,
typing, waiting) in a try/finally so cleanup always runs: put the existing steps
that use browser, context, page and inputSelector inside try, and in finally
check for and await context.close() and browser.close() (guarding for
undefined/null) to ensure the video is finalized; move or obtain the video path
at a point that is safe (after closing/finalizing the context if needed) and
handle errors from page.video().path() gracefully in the finally block.
---
Nitpick comments:
In `@scripts/playwright/record_demo.spec.js`:
- Around line 18-21: Replace the fixed sleeps after page.goto and later waits
(uses of page.waitForTimeout) with state-based waits: use page.waitForSelector
or page.waitForFunction to wait for the input to be ready (e.g., await
page.waitForSelector('input[your-input-selector]:not([disabled])')), wait for
the processing indicator to appear/disappear (e.g., await
page.waitForSelector('.processing') / await page.waitForSelector('.processing',
{ state: 'hidden' })), and wait for the first response token or message to
render (e.g., await page.waitForSelector('.response .token:first-child')).
Replace the 1s/12s waits around page.goto and later in the script with these
selector/function-based waits so the demo is deterministic and stable across
machines.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 8a79bfb5-ff79-437e-9a03-3d39713765cd
📒 Files selected for processing (2)
- docs/DEMO_RECORDING.md
- scripts/playwright/record_demo.spec.js
✅ Files skipped from review due to trivial changes (1)
- docs/DEMO_RECORDING.md
```js
const browser = await chromium.launch({ headless: true });
const context = await browser.newContext({
  recordVideo: { dir: outDir, size: { width: 1280, height: 720 } },
});
const page = await context.newPage();

console.log('Opening', url);
await page.goto(url, { waitUntil: 'networkidle' });

// Short waits to let the app bootstrap. Adjust selectors if your app differs.
await page.waitForTimeout(1000);

// If your app has a chat input with a known selector, update this.
const inputSelector = 'textarea, input[type=text], [contenteditable="true"]';
await page.waitForSelector(inputSelector, { timeout: 5000 });

// Focus the input and type a query that triggers the fake stream
await page.click(inputSelector);
await page.keyboard.type('Show processing indicator demo');

// Submit — try Enter key; if your UI needs a button, change selector accordingly
await page.keyboard.press('Enter');

// Wait long enough to capture pre-token latency + streaming
console.log('Recording demo — waiting 12s to capture pre-token phase and streaming');
await page.waitForTimeout(12000);

// Stop and save video
const video = await page.video().path();
console.log('Video saved to:', video);

await browser.close();
console.log('Done. Files in', outDir);
```
Ensure browser/context are always closed, even on failures.
If any step throws (navigation, selector, typing), cleanup is skipped and the recording may not finalize reliably. Wrap the flow in try/finally and close context/browser in finally.
Suggested fix
```diff
 (async () => {
   const url = process.argv[2] || process.env.DEMO_URL || 'http://localhost:3000';
   const outDir = 'demo-recordings';
   if (!fs.existsSync(outDir)) fs.mkdirSync(outDir, { recursive: true });
   const browser = await chromium.launch({ headless: true });
   const context = await browser.newContext({
     recordVideo: { dir: outDir, size: { width: 1280, height: 720 } },
   });
-  const page = await context.newPage();
-
-  console.log('Opening', url);
-  await page.goto(url, { waitUntil: 'networkidle' });
-
-  // Short waits to let the app bootstrap. Adjust selectors if your app differs.
-  await page.waitForTimeout(1000);
-
-  // If your app has a chat input with a known selector, update this.
-  const inputSelector = 'textarea, input[type=text], [contenteditable="true"]';
-  await page.waitForSelector(inputSelector, { timeout: 5000 });
-
-  // Focus the input and type a query that triggers the fake stream
-  await page.click(inputSelector);
-  await page.keyboard.type('Show processing indicator demo');
-
-  // Submit — try Enter key; if your UI needs a button, change selector accordingly
-  await page.keyboard.press('Enter');
-
-  // Wait long enough to capture pre-token latency + streaming
-  console.log('Recording demo — waiting 12s to capture pre-token phase and streaming');
-  await page.waitForTimeout(12000);
-
-  // Stop and save video
-  const video = await page.video().path();
-  console.log('Video saved to:', video);
-
-  await browser.close();
-  console.log('Done. Files in', outDir);
+  try {
+    const page = await context.newPage();
+    console.log('Opening', url);
+    await page.goto(url, { waitUntil: 'networkidle' });
+
+    // Short waits to let the app bootstrap. Adjust selectors if your app differs.
+    await page.waitForTimeout(1000);
+
+    // If your app has a chat input with a known selector, update this.
+    const inputSelector = 'textarea, input[type=text], [contenteditable="true"]';
+    await page.waitForSelector(inputSelector, { timeout: 5000 });
+
+    // Focus the input and type a query that triggers the fake stream
+    await page.click(inputSelector);
+    await page.keyboard.type('Show processing indicator demo');
+
+    // Submit — try Enter key; if your UI needs a button, change selector accordingly
+    await page.keyboard.press('Enter');
+
+    // Wait long enough to capture pre-token latency + streaming
+    console.log('Recording demo — waiting 12s to capture pre-token phase and streaming');
+    await page.waitForTimeout(12000);
+
+    // Stop and save video
+    const video = await page.video().path();
+    console.log('Video saved to:', video);
+    console.log('Done. Files in', outDir);
+  } finally {
+    await context.close();
+    await browser.close();
+  }
 })();
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```js
const browser = await chromium.launch({ headless: true });
const context = await browser.newContext({
  recordVideo: { dir: outDir, size: { width: 1280, height: 720 } },
});
try {
  const page = await context.newPage();
  console.log('Opening', url);
  await page.goto(url, { waitUntil: 'networkidle' });

  // Short waits to let the app bootstrap. Adjust selectors if your app differs.
  await page.waitForTimeout(1000);

  // If your app has a chat input with a known selector, update this.
  const inputSelector = 'textarea, input[type=text], [contenteditable="true"]';
  await page.waitForSelector(inputSelector, { timeout: 5000 });

  // Focus the input and type a query that triggers the fake stream
  await page.click(inputSelector);
  await page.keyboard.type('Show processing indicator demo');

  // Submit — try Enter key; if your UI needs a button, change selector accordingly
  await page.keyboard.press('Enter');

  // Wait long enough to capture pre-token latency + streaming
  console.log('Recording demo — waiting 12s to capture pre-token phase and streaming');
  await page.waitForTimeout(12000);

  // Stop and save video
  const video = await page.video().path();
  console.log('Video saved to:', video);
  console.log('Done. Files in', outDir);
} finally {
  await context.close();
  await browser.close();
}
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@scripts/playwright/record_demo.spec.js` around lines 11 - 43, Wrap the main
flow (creating context/page, navigation, typing, waiting) in a try/finally so
cleanup always runs: put the existing steps that use browser, context, page and
inputSelector inside try, and in finally check for and await context.close() and
browser.close() (guarding for undefined/null) to ensure the video is finalized;
move or obtain the video path at a point that is safe (after closing/finalizing
the context if needed) and handle errors from page.video().path() gracefully in
the finally block.
I meant showing the use case, not as code.
If there's no response, closing this in 1 day.
Closing this pull request to keep the queue tidy: there hasn’t been any follow-up activity here for a while. If you still want these changes merged, feel free to reopen or open a fresh PR against current |
Show a processing indicator as soon as the user submits a message (before the first tokens arrive) by introducing a `turnInFlight` state in the frontend stream manager.

Changes:

- Add `turnInFlight` and `getIsProcessing()` to `StreamManager.tsx`.
- Set/clear `turnInFlight` in `customSendMessage.ts` and `StreamingWorkflow.ts`.

This reduces pre-token UI blankness and prevents users from retrying while the agent is actually working.
Summary by CodeRabbit
Bug Fixes
UX
Documentation
Tests