
fix(studio): use a dedicated error role for Playground failures #58

Draft
k-taro56 wants to merge 1 commit into main from eng-516

Conversation

@k-taro56
Contributor

Summary

  • Inference errors in the Playground used to reuse role: "assistant" for the error bubble while leaving the empty placeholder in place, so every failed send rendered as two ASSISTANT labels (one empty + one [error] …). Repeated failures made the chat look like ASSISTANT spam.
  • Add a dedicated "error" role to Message. The catch path now (a) replaces the empty assistant placeholder when present, (b) otherwise appends the error so any partial streaming output from before the failure is preserved.
  • Filter "error" messages out of the history forwarded to /api/inference/chat — cloud-api's ChatRequestBody only accepts system | user | assistant, and previous error bubbles shouldn't poison the next prompt's context.
  • Wire track("studio_sse_error", { source: "inference" }) on the failure path so Playground inference failures land in the existing SPA telemetry channel.
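The catch-path behavior described above can be sketched roughly as follows. This is an illustration based on the PR description only; the type and helper names (`applyInferenceError`, the exact `Message` shape) are assumptions, not copied from the actual diff:

```typescript
// Sketch only: identifiers are illustrative, not from the real diff.
type Role = "system" | "user" | "assistant" | "error";

interface Message {
  role: Role;
  content: string;
}

// On inference failure: (a) replace the empty assistant placeholder when
// present; (b) otherwise append, so partial streamed output is preserved.
function applyInferenceError(prev: Message[], errorText: string): Message[] {
  const errorMsg: Message = { role: "error", content: errorText };
  const last = prev[prev.length - 1];
  if (last !== undefined && last.role === "assistant" && last.content === "") {
    return [...prev.slice(0, -1), errorMsg];
  }
  return [...prev, errorMsg];
}
```

In a React component this would typically run inside a functional state update (`setMessages(prev => applyInferenceError(prev, text))`) so it always sees the latest message list.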

Test plan

  • pnpm --filter @arkor/studio-app exec tsc --noEmit
  • pnpm --filter @arkor/studio-app test (46 passed)
  • Manual: force an inference error in the Playground (e.g. stop cloud-api or pick an invalid adapter), confirm exactly one ERROR bubble per failed send, and that the next successful send does not include the prior error in the request body.

Enhanced the Message interface to include an "error" role. Updated the send function to filter error messages out of the history before sending. Improved error handling to replace the empty assistant placeholder with an error message when present, giving clearer user feedback on inference failures.
Copilot AI review requested due to automatic review settings April 29, 2026 10:12
@k-taro56 k-taro56 self-assigned this Apr 29, 2026
@soleil-colza soleil-colza self-requested a review April 29, 2026 10:13

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: a8b91907ed


}
return [...prev, errorMsg];
});
track("studio_sse_error", { source: "inference" });

P1: Import the analytics tracker before calling `track`

The new call to track("studio_sse_error", …) references an undeclared identifier, so @arkor/studio-app no longer type-checks under strict TypeScript (TS2304: Cannot find name 'track'). This breaks CI/build flows that run tsc --noEmit and should be fixed by importing the intended tracker (or removing the call).
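One way to address this finding, sketched below. The telemetry module path and export name are assumptions (the PR does not show where `track` lives), and the wrapper is a suggested pattern, not the repo's actual API:

```typescript
// Hypothetical import; the real module path is an assumption:
// import { track } from "../lib/telemetry";

type TrackFn = (event: string, props?: Record<string, string>) => void;

// Wrap the tracker so a missing or throwing telemetry helper can never
// mask the original inference error inside the catch path.
function safeTrack(
  track: TrackFn | undefined,
  event: string,
  props?: Record<string, string>,
): void {
  try {
    track?.(event, props);
  } catch {
    // Telemetry failures are intentionally swallowed.
  }
}
```

With this shape, `safeTrack(track, "studio_sse_error", { source: "inference" })` type-checks even when the tracker is optionally injected, and a broken tracker degrades silently instead of replacing the user-visible error.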


Contributor

Copilot AI left a comment


Pull request overview

This PR updates the Studio Playground chat UI to represent inference failures as dedicated "error" messages, avoiding duplicate/empty assistant bubbles and preventing prior error bubbles from being sent back to /api/inference/chat.

Changes:

  • Add an "error" role to the local Message type and emit error bubbles on inference failures.
  • On failure, replace the empty assistant placeholder when present; otherwise append an error message to preserve any partial streamed output.
  • Exclude "error" messages from the request history sent to /api/inference/chat, and attempt to emit a telemetry event on failure.
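The history exclusion in the last bullet could look roughly like this. A sketch only: the `Message` shape and accepted roles follow the PR text, and `toRequestHistory` is a hypothetical name, not the function in the diff:

```typescript
// Message shape assumed from the PR description.
type Role = "system" | "user" | "assistant" | "error";

interface Message {
  role: Role;
  content: string;
}

// cloud-api's ChatRequestBody only accepts system | user | assistant,
// so error bubbles are stripped before the request body is built.
function toRequestHistory(messages: Message[]): Message[] {
  return messages.filter((m) => m.role !== "error");
}
```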


}
return [...prev, errorMsg];
});
track("studio_sse_error", { source: "inference" });

Copilot AI Apr 29, 2026


track(...) is referenced but not imported/defined anywhere in the Studio SPA, so this will fail TypeScript compilation (and would also throw at runtime if it slipped through). Import the correct telemetry helper (or use the existing telemetry API if one exists) and consider making the call resilient if telemetry is optional.

Contributor

@soleil-colza soleil-colza left a comment


LGTMeow

@k-taro56 k-taro56 marked this pull request as draft April 29, 2026 18:40

3 participants