📝 Walkthrough

A streaming response handling fix terminates the completion loop upon encountering a finish reason.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~22 minutes
🚥 Pre-merge checks: ✅ 3 checks passed
Actionable comments posted: 1
🧹 Nitpick comments (1)
plugins/openrouter/example/mimo_example.py (1)
31-38: Consider graceful handling when `OPENROUTER_API_KEY` is absent.

The direct `os.environ["OPENROUTER_API_KEY"]` access will raise a bare `KeyError` if the variable is unset, which gives the user no hint about what went wrong. For a better user experience, validate first and raise a descriptive error:

🛡️ Optional: Add explicit validation

```diff
 async def create_agent(**kwargs) -> Agent:
     """Create a video assistant powered by Xiaomi MiMo-V2-Omni."""
+    api_key = os.environ.get("OPENROUTER_API_KEY")
+    if not api_key:
+        raise ValueError("OPENROUTER_API_KEY environment variable is required")
     llm = openai.ChatCompletionsVLM(
         model="xiaomi/mimo-v2-omni",
         base_url="https://openrouter.ai/api/v1",
-        api_key=os.environ["OPENROUTER_API_KEY"],
+        api_key=api_key,
         frame_buffer_seconds=3,
         frame_width=512,
         frame_height=384,
     )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@plugins/openrouter/example/mimo_example.py` around lines 31 - 38, The code directly uses os.environ["OPENROUTER_API_KEY"] when constructing openai.ChatCompletionsVLM (assigned to llm), which raises KeyError if the env var is missing; change to validate the API key first (e.g., use os.getenv("OPENROUTER_API_KEY") or os.environ.get and check for None/empty) and raise or log a clear, descriptive error before calling openai.ChatCompletionsVLM so the user sees a helpful message rather than a bare KeyError.
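The suggested validation can be factored into a small helper. A minimal runnable sketch; the `require_env` helper name is hypothetical and not part of the plugin:

```python
import os


def require_env(name: str) -> str:
    """Fetch an environment variable, raising a descriptive error if unset.

    Avoids the bare KeyError that os.environ[name] would raise, so the
    user sees which variable is missing and why it is needed.
    """
    value = os.environ.get(name)
    if not value:
        raise ValueError(f"{name} environment variable is required")
    return value
```

The example constructor call would then use `api_key=require_env("OPENROUTER_API_KEY")` instead of indexing `os.environ` directly.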
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@plugins/openai/vision_agents/plugins/openai/chat_completions/chat_completions_vlm.py`:
- Around line 269-271: The stream loop currently creates
LLMResponseEvent(original=chunk, text=total_text) and immediately uses break,
which prevents draining any subsequent usage-only chunk and so omits
input/output token metadata; instead, stop breaking on first finish_reason —
continue consuming the generator until it naturally ends, preserve the final
chunk (e.g., final_chunk variable or update the last seen chunk) and after the
loop call the existing _extract_usage_tokens(final_chunk) to populate usage
fields (input_tokens/output_tokens) on the LLMResponseEvent before returning;
update logic around LLMResponseEvent creation and the break in
chat_completions_vlm.py to mirror Gemini’s pattern so
stream_options={"include_usage": True} is honored.
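The drain-to-the-end pattern described above can be sketched as follows. This is a minimal, self-contained illustration using stand-in chunk dataclasses (hypothetical stand-ins, not the openai SDK's actual types), showing why the loop must not break on the first finish_reason: with `stream_options={"include_usage": True}`, the usage-only chunk arrives after it.

```python
from dataclasses import dataclass, field
from typing import List, Optional


# Hypothetical stand-ins for the SDK's streaming chunk shapes.
@dataclass
class Delta:
    content: Optional[str] = None


@dataclass
class Choice:
    delta: Delta
    finish_reason: Optional[str] = None


@dataclass
class Usage:
    prompt_tokens: int = 0
    completion_tokens: int = 0


@dataclass
class Chunk:
    choices: List[Choice] = field(default_factory=list)
    usage: Optional[Usage] = None


def consume_stream(stream):
    """Drain the whole stream instead of breaking on the first finish_reason.

    Keeps the last chunk seen, because the usage-only chunk (emitted when
    include_usage is enabled) trails the chunk carrying finish_reason.
    """
    total_text = ""
    final_chunk = None
    for chunk in stream:
        final_chunk = chunk  # remember the last chunk; usage rides on it
        for choice in chunk.choices:
            if choice.delta.content:
                total_text += choice.delta.content
        # NOTE: deliberately no `break` on finish_reason here.
    usage = final_chunk.usage if final_chunk else None
    return total_text, usage


# Simulated stream: two content chunks, a finish chunk, then a usage-only chunk.
stream = [
    Chunk(choices=[Choice(delta=Delta(content="Hello "))]),
    Chunk(choices=[Choice(delta=Delta(content="world"))]),
    Chunk(choices=[Choice(delta=Delta(), finish_reason="stop")]),
    Chunk(choices=[], usage=Usage(prompt_tokens=12, completion_tokens=2)),
]
text, usage = consume_stream(stream)
```

Breaking on `finish_reason="stop"` would have dropped the final chunk and left `usage` empty; draining the generator recovers both the full text and the token counts.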
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 45c20e41-6100-429e-8362-4eb97560f177
📒 Files selected for processing (2)
plugins/openai/vision_agents/plugins/openai/chat_completions/chat_completions_vlm.py
plugins/openrouter/example/mimo_example.py
Summary by CodeRabbit
Bug Fixes
New Features