fix(model): auto-generate ChatResponse.id when LLM doesn't provide it (#721)
Conversation
Code Review
This pull request effectively addresses a NullPointerException that occurs when an LLM provider, such as Ollama, does not return an ID in its response. The solution of auto-generating a UUID in the ChatResponse.Builder is a clean and centralized approach that improves the robustness of the system for all model integrations. The accompanying test update correctly verifies this new behavior. I have one minor suggestion to further strengthen the ID validation.
```java
if (responseId == null || responseId.isEmpty()) {
    responseId = UUID.randomUUID().toString();
}
```
The current check for `id` treats strings containing only whitespace as valid. This could lead to issues if downstream components expect a non-blank identifier. Using `trim().isEmpty()` is more robust: whitespace-only strings are then also treated as invalid and trigger the UUID generation.
Suggested change:

```diff
-if (responseId == null || responseId.isEmpty()) {
+if (responseId == null || responseId.trim().isEmpty()) {
     responseId = UUID.randomUUID().toString();
 }
```
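As a side note, assuming the project can target Java 11 or newer (the language level isn't stated in this PR), `String.isBlank()` expresses the same intent without the explicit `trim()`:

```java
// Java 11+ alternative: isBlank() is true for empty and whitespace-only strings.
// Note it uses Character.isWhitespace(), while trim() strips chars <= U+0020,
// so the two differ only on unusual whitespace code points.
if (responseId == null || responseId.isBlank()) {
    responseId = UUID.randomUUID().toString();
}
```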
Pull request overview
This PR fixes issue #708 where the AGUI protocol fails when using OllamaChatModel because Ollama's API doesn't return an id field in chat completion responses, causing NullPointerException in AGUI event constructors that require non-null message IDs.
Changes:

- Modified `ChatResponse.Builder.build()` to auto-generate a UUID when the `id` field is null or empty
- Updated `AnthropicResponseParserTest` to verify auto-generated IDs instead of expecting null
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| agentscope-core/src/main/java/io/agentscope/core/model/ChatResponse.java | Added UUID auto-generation logic in the Builder's build() method to ensure all ChatResponse instances have valid IDs |
| agentscope-core/src/test/java/io/agentscope/core/formatter/anthropic/AnthropicResponseParserTest.java | Updated test to verify that IDs are auto-generated when not provided by the parser |
…agentscope-ai#708)

Ollama API doesn't return an id field in its chat completion responses, unlike other LLM providers (OpenAI, Anthropic, DashScope). This causes the AGUI protocol and other components that depend on ChatResponse.getId() to fail with "messageId cannot be null".

Solution:
- Modified ChatResponse.Builder.build() to auto-generate a UUID when id is null, empty, or blank (whitespace-only)
- Updated AnthropicResponseParserTest to verify auto-generated IDs

Per review feedback:
- Copilot: Added isEmpty() check for empty strings
- gemini-code-assist: Added trim().isEmpty() for blank strings

This ensures compatibility across all LLM providers while maintaining backward compatibility with providers that return an id.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Force-pushed from 7d93f73 to 9a23bcb
AgentScope-Java Version
1.0.9-SNAPSHOT
Description
This PR fixes Issue #708, where OllamaChatModel fails with the AGUI protocol
because Ollama API responses are missing the `id` field.
Problem
Ollama API doesn't return an `id` field in its chat completion responses,
unlike other LLM providers (OpenAI, Anthropic, DashScope). This causes the
AGUI protocol and other components that depend on `ChatResponse.getId()` to
fail with `NullPointerException: "messageId cannot be null"`.
Root Cause Chain:
```
Ollama API → no id field
    ↓
OllamaResponseParser → ChatResponse.id = null
    ↓
Msg.id = null
    ↓
Event.getMessage().getId() = null
    ↓
AguiEvent.TextMessageStart → NullPointerException
```
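For context, here is a minimal sketch of the kind of guard that produces this error; the actual `AguiEvent.TextMessageStart` constructor is an assumption, not code from this PR:

```java
import java.util.Objects;

// Hypothetical sketch of the failing check; the real AguiEvent class may differ.
class TextMessageStart {
    private final String messageId;

    TextMessageStart(String messageId) {
        // With a null ChatResponse id (the Ollama case), this throws
        // NullPointerException: "messageId cannot be null".
        this.messageId = Objects.requireNonNull(messageId, "messageId cannot be null");
    }
}
```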
Solution
Modified `ChatResponse.Builder.build()` to automatically generate a UUID when
`id` is null, empty, or blank (whitespace-only). This provides a unified solution that:

- fixes every provider that omits an id automatically, not just Ollama
- preserves the original value when a provider does supply an id
- applies the generated id to a local copy, leaving the builder's own state untouched
Changes
Modified Files:

- `agentscope-core/src/main/java/io/agentscope/core/model/ChatResponse.java`: `build()` now generates a UUID when `id` is null or empty
- `agentscope-core/src/test/java/io/agentscope/core/formatter/anthropic/AnthropicResponseParserTest.java`: updated to expect auto-generated IDs
Code Change:
```java
public ChatResponse build() {
    // Auto-generate id if not set or blank (for providers like Ollama)
    String responseId = this.id;
    if (responseId == null || responseId.trim().isEmpty()) {
        responseId = UUID.randomUUID().toString();
    }
    return new ChatResponse(responseId, content, usage, metadata, finishReason);
}
```
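A quick sketch of the resulting behavior; the static `builder()` factory and the setter names are assumptions, since only `Builder.build()` and `getId()` appear in this PR:

```java
// Id supplied by the provider (OpenAI, Anthropic, DashScope): kept as-is.
ChatResponse withId = ChatResponse.builder()
        .id("chatcmpl-123")
        .build();
// withId.getId() -> "chatcmpl-123"

// No id supplied (the Ollama case): build() fills in a UUID.
ChatResponse withoutId = ChatResponse.builder().build();
// withoutId.getId() -> e.g. "550e8400-e29b-41d4-a716-446655440000", never null
```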
Testing
Manual Testing:
Automated Testing:
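A sketch of the kind of assertion the updated test performs; the class name, `builder()` factory, and setup below are illustrative, not the actual `AnthropicResponseParserTest`:

```java
import static org.junit.jupiter.api.Assertions.assertDoesNotThrow;
import static org.junit.jupiter.api.Assertions.assertNotNull;

import java.util.UUID;
import org.junit.jupiter.api.Test;

class ChatResponseIdGenerationTest {

    @Test
    void buildGeneratesUuidWhenIdMissing() {
        // No id set, mirroring what OllamaResponseParser produces.
        ChatResponse response = ChatResponse.builder().build();

        assertNotNull(response.getId(), "id must never be null");
        // The generated value should parse as a valid UUID.
        assertDoesNotThrow(() -> UUID.fromString(response.getId()));
    }
}
```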
Checklist