fix: cancel suggestion stream on submit to prevent chat blur #565
ngoiyaeric wants to merge 1 commit into main from
Conversation
…ge is sent, which will prevent the chat from blurring. Co-authored-by: ngoiyaeric <115367894+ngoiyaeric@users.noreply.github.com>
👋 Jules, reporting for duty! I'm here to lend a hand with this pull request. When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down. I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job! For more direct control, you can switch me to Reactive Mode: when this mode is on, I will only act on comments that specifically mention me. New to Jules? Learn more at jules.google/docs. For security, I will only act on instructions from the user who triggered this task.
Walkthrough: The changes introduce a suggestion tracking mechanism in the chat panel component, using a ref to guard against stale suggestion-stream updates.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 5 passed
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@bun_output.txt`:
- Around line 1-16: The committed file bun_output.txt contains transient local
dev output and machine-specific details and should be removed from source
control; delete bun_output.txt from the repo and add a rule to .gitignore (e.g.,
bun_output.txt or a general logs pattern like *.log or /bun_output.txt) so it
isn’t re-added, then run git rm --cached bun_output.txt (or equivalent) to
remove it from the index and commit the .gitignore change and the deletion;
ensure no other runtime artifacts are present in commits.
In `@components/chat-panel.tsx`:
- Line 46: Replace the current string-based guard with a per-request token:
before calling getSuggestions(...) generate a unique id (e.g., UUID or
incrementing counter) and store it in activeSuggestionRef.current, pass that
token into the streaming handler, and on each incoming chunk compare the chunk's
token to activeSuggestionRef.current and ignore chunks that don't match; clear
the ref only when the specific request finishes/aborts. Update all places that
set/compare activeSuggestionRef (the getSuggestions call site and the stream
chunk handlers referenced around activeSuggestionRef, lines near 98, 140, 156,
and 160-172) so they use this request token instead of the raw query string.
Ensure token lifecycle covers start, completion, and abort so stale streams
can't repopulate suggestions.
In `@patch_suggestions.js`:
- Around line 7-111: The patcher currently performs blind string.replace on
debouncedGetSuggestions, handleSubmit, and handleClear which can silently noop
or duplicate injections; modify the script to first check whether the new
snippet (e.g., presence of "activeSuggestionRef.current" or the exact
newDebounce/newHandleSubmit/newHandleClear marker) already exists and to assert
the old snippet exists before replacing, failing fast (throw/error and stop) if
a match is missing, and only perform the replacement when the old snippet is
present and the new snippet is not, so updates are idempotent for
debouncedGetSuggestions, handleSubmit, and handleClear.
In `@test-grep.sh`:
- Line 1: Add a shebang to the top of test-grep.sh so it runs under a known
shell: prepend a line that uses /usr/bin/env to invoke bash (e.g., use
/usr/bin/env bash) to satisfy ShellCheck SC2148 and ensure the grep command
(grep -rnw -A 10 -B 5 "const handleSubmit = async" components/chat-panel.tsx)
executes with a consistent shell.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 2dbd1439-1a7a-431d-8df2-d5693f137d03
📒 Files selected for processing (4)
bun_output.txt, components/chat-panel.tsx, patch_suggestions.js, test-grep.sh
📜 Review details
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2026-02-10T08:45:03.719Z
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-02-10T08:45:03.719Z
Learning: The user confirmed that app/auth/page.tsx, lib/auth/v0/auth-service.ts, lib/supabase/browser-client.ts, and middleware.ts all exist and are fully implemented with Supabase SSR authentication. These files are production-ready and should not be replaced or considered legacy/stubs.
Applied to files:
bun_output.txt
🪛 Shellcheck (0.11.0)
test-grep.sh
[error] 1-1: Tips depend on target shell and yours is unknown. Add a shebang or a 'shell' directive.
(SC2148)
$ next dev --turbo
▲ Next.js 15.3.8 (Turbopack)
- Local: http://localhost:3000
- Network: http://192.168.0.2:3000
- Environments: .env

✓ Starting...
✓ Compiled middleware in 386ms
✓ Ready in 1880ms
○ Compiling / ...
✓ Compiled / in 28.6s
Chat DB actions loaded. Ensure getCurrentUserId() is correctly implemented for server-side usage if applicable.
GET / 200 in 33121ms
GET / 200 in 976ms
[Auth] Supabase URL or Anon Key is not set for server-side auth.
POST / 200 in 1775ms
Remove the local dev log from source control.
This is transient runtime output, not a reproducible test artifact. It adds review noise and exposes machine-specific details like the LAN address and local auth warning without helping validate the chat-panel fix.
  const inputRef = useRef<HTMLTextAreaElement>(null)
  const formRef = useRef<HTMLFormElement>(null)
  const fileInputRef = useRef<HTMLInputElement>(null)
  const activeSuggestionRef = useRef<string>('')
Use a unique request token instead of the query text.
Clearing activeSuggestionRef.current to '' only works until the same prompt is entered again. If an older getSuggestions("...") stream is still alive when the user re-types that exact text, both requests share the same guard value and late chunks from the stale stream can repopulate suggestions, recreating the blur/overwrite bug. Track a per-request id/token instead of the raw query string.
🛠️ Suggested fix
-  const activeSuggestionRef = useRef<string>('')
+  const activeSuggestionRef = useRef<symbol | null>(null)

-      activeSuggestionRef.current = ''
+      activeSuggestionRef.current = null

-      const currentQuery = value
-      activeSuggestionRef.current = currentQuery
+      const requestToken = Symbol('suggestions')
+      activeSuggestionRef.current = requestToken
       debounceTimeoutRef.current = setTimeout(async () => {
-        if (activeSuggestionRef.current !== currentQuery) return
+        if (activeSuggestionRef.current !== requestToken) return
         try {
           const suggestionsStream = await getSuggestions(value, mapData)
           for await (const partialSuggestions of readStreamableValue(
             suggestionsStream
           )) {
-            if (activeSuggestionRef.current !== currentQuery) break
+            if (activeSuggestionRef.current !== requestToken) break
             if (partialSuggestions) {
               setSuggestions(partialSuggestions as PartialRelated)
             }
           }

Also applies to: 98-98, 140-140, 156-156, 160-172
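The race this comment describes can be sketched framework-free. In the sketch below, `fakeStream`, `startSuggestions`, and `submit` are hypothetical stand-ins for the component's suggestion stream, debounced fetch, and `handleSubmit`; a `Symbol` is unique per call, so two requests for the identical query string still get distinct guards, which is exactly what the raw query string cannot provide:

```typescript
// Framework-free sketch of the stale-stream race and a per-request token guard.
// fakeStream, startSuggestions, and submit are hypothetical stand-ins, not the
// actual chat-panel code.

type Chunk = string;

let activeToken: symbol | null = null;
let suggestions: Chunk | null = null;

// Simulates a slow server stream that keeps yielding after the user submits.
async function* fakeStream(chunks: Chunk[], delayMs: number): AsyncGenerator<Chunk> {
  for (const chunk of chunks) {
    await new Promise(resolve => setTimeout(resolve, delayMs));
    yield chunk;
  }
}

async function startSuggestions(query: string, delayMs: number): Promise<void> {
  // Unique per request, even when the query text repeats.
  const token = Symbol(query);
  activeToken = token;
  for await (const chunk of fakeStream([`${query} a`, `${query} b`], delayMs)) {
    if (activeToken !== token) return; // a newer request or a submit invalidated us
    suggestions = chunk;
  }
}

function submit(): void {
  activeToken = null; // invalidate any in-flight stream
  suggestions = null;
}

async function demo(): Promise<void> {
  const stale = startSuggestions('hello', 30); // stream still alive...
  submit();                                    // ...when the user submits
  await stale;
  console.log(suggestions); // null: the stale chunks were ignored
}

demo();
```

With the string-based guard, re-typing "hello" while the old stream was alive would have made both requests share the guard value; the token makes every request distinguishable.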
content = content.replace(
  'const fileInputRef = useRef<HTMLInputElement>(null)',
  'const fileInputRef = useRef<HTMLInputElement>(null)\n  const activeSuggestionRef = useRef<string>(\'\')'
);

// We will update debouncedGetSuggestions
const oldDebounce = `  const debouncedGetSuggestions = useCallback(
    (value: string) => {
      if (debounceTimeoutRef.current) {
        clearTimeout(debounceTimeoutRef.current)
      }

      const wordCount = value.trim().split(/\\s+/).filter(Boolean).length
      if (wordCount < 2) {
        setSuggestions(null)
        return
      }

      debounceTimeoutRef.current = setTimeout(async () => {
        const suggestionsStream = await getSuggestions(value, mapData)
        for await (const partialSuggestions of readStreamableValue(
          suggestionsStream
        )) {
          if (partialSuggestions) {
            setSuggestions(partialSuggestions as PartialRelated)
          }
        }
      }, 500) // 500ms debounce delay
    },
    [mapData, setSuggestions]
  )`;

const newDebounce = `  const debouncedGetSuggestions = useCallback(
    (value: string) => {
      if (debounceTimeoutRef.current) {
        clearTimeout(debounceTimeoutRef.current)
      }

      const wordCount = value.trim().split(/\\s+/).filter(Boolean).length
      if (wordCount < 2) {
        setSuggestions(null)
        activeSuggestionRef.current = ''
        return
      }

      const currentQuery = value
      activeSuggestionRef.current = currentQuery

      debounceTimeoutRef.current = setTimeout(async () => {
        if (activeSuggestionRef.current !== currentQuery) return
        try {
          const suggestionsStream = await getSuggestions(value, mapData)
          for await (const partialSuggestions of readStreamableValue(
            suggestionsStream
          )) {
            if (activeSuggestionRef.current !== currentQuery) break
            if (partialSuggestions) {
              setSuggestions(partialSuggestions as PartialRelated)
            }
          }
        } catch (error) {
          console.error(error)
        }
      }, 500) // 500ms debounce delay
    },
    [mapData, setSuggestions]
  )`;

content = content.replace(oldDebounce, newDebounce);

const oldHandleSubmit = `  const handleSubmit = async (e: React.FormEvent<HTMLFormElement>) => {
    e.preventDefault()
    if (!input.trim() && !selectedFile) {
      return
    }`;

const newHandleSubmit = `  const handleSubmit = async (e: React.FormEvent<HTMLFormElement>) => {
    e.preventDefault()
    if (!input.trim() && !selectedFile) {
      return
    }

    if (debounceTimeoutRef.current) {
      clearTimeout(debounceTimeoutRef.current)
    }
    activeSuggestionRef.current = ''
    setSuggestions(null)`;

content = content.replace(oldHandleSubmit, newHandleSubmit);

const oldHandleClear = `  const handleClear = async () => {
    setMessages([])
    setSuggestions(null)`;

const newHandleClear = `  const handleClear = async () => {
    if (debounceTimeoutRef.current) {
      clearTimeout(debounceTimeoutRef.current)
    }
    activeSuggestionRef.current = ''
    setMessages([])
    setSuggestions(null)`;

content = content.replace(oldHandleClear, newHandleClear);

fs.writeFileSync(path, content);
Make the patcher fail fast and idempotent.
These replacements are exact-string rewrites, but none of them verify that a match occurred. A formatting drift turns this into a silent partial/no-op, and several replacements rewrite A to A + extra, so rerunning the script will duplicate the injected lines while still printing success.
🛠️ Suggested fix
+function replaceOrThrow(source, from, to, label) {
+  if (source.includes(to)) return source; // already patched
+  if (!source.includes(from)) {
+    throw new Error(`Could not find ${label} in ${path}`);
+  }
+  return source.replace(from, to);
+}
+
-content = content.replace(
+content = replaceOrThrow(
+  content,
   'const fileInputRef = useRef<HTMLInputElement>(null)',
-  'const fileInputRef = useRef<HTMLInputElement>(null)\n  const activeSuggestionRef = useRef<string>(\'\')'
-);
+  'const fileInputRef = useRef<HTMLInputElement>(null)\n  const activeSuggestionRef = useRef<string>(\'\')',
+  'activeSuggestionRef declaration'
+);

Apply the same helper to the debounce, submit, and clear replacements.
@@ -0,0 +1 @@
+grep -rnw -A 10 -B 5 "const handleSubmit = async" components/chat-panel.tsx
Add a shebang so this runs under a known shell.
*.sh without an interpreter line triggers ShellCheck SC2148 and makes direct execution depend on the caller’s default shell. Add #!/usr/bin/env bash at the top.
🛠️ Suggested fix
+#!/usr/bin/env bash
 grep -rnw -A 10 -B 5 "const handleSubmit = async" components/chat-panel.tsx
Closes #549.

This commit resolves a UI bug where example query suggestions would blur the chat interface even after the user had already submitted a query and a response had begun generating.

Root Cause:
The debouncedGetSuggestions function in components/chat-panel.tsx reads from an asynchronous server stream (readStreamableValue). If a user submits their message while this stream is still active, the UI successfully generates a response but is later overwritten by incoming delayed stream chunks which update the suggestions state, blurring the chat.

Fix:
- Adds activeSuggestionRef to track the latest query actively fetching suggestions.
- Updates debouncedGetSuggestions to break out of the async for await stream loop if activeSuggestionRef changes or is cleared.
- Clears activeSuggestionRef and invokes setSuggestions(null) inside handleSubmit and handleClear to instantly dismiss any pending suggestions and prevent further updates from ongoing streams.

PR created automatically by Jules for task 5909036928276179497 started by @ngoiyaeric