
add mimo example with openrouter #432

Merged
maxkahan merged 1 commit into main from add-mimo-example on Mar 20, 2026

Conversation

Contributor

@maxkahan maxkahan commented Mar 20, 2026

Summary by CodeRabbit

  • Bug Fixes

    • Fixed streaming response handling to properly terminate after completion signal instead of continuing to consume additional chunks.
  • New Features

    • Added example implementation for MiMo Video Assistant using the MiMo-V2-Omni model through OpenRouter, including speech-to-text and text-to-speech capabilities.

coderabbitai bot commented Mar 20, 2026

📝 Walkthrough

Walkthrough

A streaming response handling fix terminates the completion loop upon encountering finish_reason, preventing further chunk consumption. Additionally, a new example script demonstrates configuring a MiMo-V2-Omni video assistant via OpenRouter with speech and vision capabilities.
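The termination behavior described here can be sketched with a minimal, self-contained loop. The mock Chunk/Choice types below stand in for the OpenAI SDK's streaming objects; names are illustrative, not the plugin's actual code:

```python
from dataclasses import dataclass
from typing import Iterable, Optional


@dataclass
class Choice:
    delta_content: Optional[str]
    finish_reason: Optional[str]


@dataclass
class Chunk:
    choices: list


def consume_stream(chunks: Iterable[Chunk]):
    """Accumulate streamed text and stop as soon as finish_reason arrives."""
    total_text = ""
    consumed = 0
    for chunk in chunks:
        consumed += 1
        choice = chunk.choices[0]
        if choice.delta_content:
            total_text += choice.delta_content
        if choice.finish_reason is not None:
            break  # terminate instead of draining remaining chunks
    return total_text, consumed


stream = [
    Chunk([Choice("Hello", None)]),
    Chunk([Choice(" world", None)]),
    Chunk([Choice(None, "stop")]),
    Chunk([Choice(None, None)]),  # a hypothetical trailing chunk
]
text, n = consume_stream(stream)
# text == "Hello world"; n == 3, i.e. the trailing chunk is never consumed
```

Note that breaking immediately means anything emitted after the finish_reason chunk is dropped, which is exactly the trade-off the inline review comment below raises.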

Changes

Cohort / File(s) | Summary

Stream Termination Fix (plugins/openai/.../chat_completions_vlm.py)
Modified the streaming loop to assign the final llm_response and break immediately after finish_reason is encountered, preventing additional chunk iteration before the response is returned.

MiMo Video Assistant Example (plugins/openrouter/example/mimo_example.py)
New example script configuring an asynchronous video assistant using Xiaomi's MiMo-V2-Omni model through OpenRouter, including agent creation with GetStream connectivity and speech capabilities.
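As a rough sketch of the OpenRouter configuration the example uses: `build_mimo_vlm_kwargs` below is a hypothetical helper, but the model name, base URL, and frame parameters mirror the example's constructor call shown in the review diff further down, with the API-key lookup hedged behind explicit validation:

```python
import os


def build_mimo_vlm_kwargs() -> dict:
    """Assemble keyword arguments for a VLM client pointed at OpenRouter.

    Hypothetical helper; the keys mirror the example's ChatCompletionsVLM call.
    """
    api_key = os.environ.get("OPENROUTER_API_KEY")
    if not api_key:
        raise ValueError("OPENROUTER_API_KEY environment variable is required")
    return {
        "model": "xiaomi/mimo-v2-omni",
        "base_url": "https://openrouter.ai/api/v1",
        "api_key": api_key,
        "frame_buffer_seconds": 3,  # seconds of video frames kept buffered
        "frame_width": 512,         # frames are downscaled before being sent
        "frame_height": 384,
    }
```

Failing fast with a descriptive ValueError here is the same pattern the nitpick comment below recommends over a bare KeyError.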

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~22 minutes

Poem

A stream once bled without a name,
chunks cascading past the end—
now silence comes when finish calls,
the loop knows when to break,
and MiMo sees, at last, the light.

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name | Status | Explanation
Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled.
Title check | ✅ Passed | The title accurately describes the main addition in the changeset: a new MiMo example using OpenRouter, which is the primary focus of the PR.
Docstring Coverage | ✅ Passed | Docstring coverage is 100.00%, which meets the required threshold of 80.00%.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
plugins/openrouter/example/mimo_example.py (1)

31-38: Consider graceful handling when OPENROUTER_API_KEY is absent.

The direct os.environ["OPENROUTER_API_KEY"] access will raise a bare KeyError if the variable is unset, which gives the user no hint about what went wrong. For a better experience, validate the key and raise a descriptive error:

🛡️ Optional: Add explicit validation
 async def create_agent(**kwargs) -> Agent:
     """Create a video assistant powered by Xiaomi MiMo-V2-Omni."""
+    api_key = os.environ.get("OPENROUTER_API_KEY")
+    if not api_key:
+        raise ValueError("OPENROUTER_API_KEY environment variable is required")
     llm = openai.ChatCompletionsVLM(
         model="xiaomi/mimo-v2-omni",
         base_url="https://openrouter.ai/api/v1",
-        api_key=os.environ["OPENROUTER_API_KEY"],
+        api_key=api_key,
         frame_buffer_seconds=3,
         frame_width=512,
         frame_height=384,
     )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@plugins/openrouter/example/mimo_example.py` around lines 31-38: the code
directly uses os.environ["OPENROUTER_API_KEY"] when constructing
openai.ChatCompletionsVLM (assigned to llm), which raises KeyError if the env
var is missing; change to validate the API key first (e.g., use
os.getenv("OPENROUTER_API_KEY") or os.environ.get and check for None/empty) and
raise or log a clear, descriptive error before calling openai.ChatCompletionsVLM
so the user sees a helpful message rather than a bare KeyError.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@plugins/openai/vision_agents/plugins/openai/chat_completions/chat_completions_vlm.py`:
- Around lines 269-271: The stream loop currently creates
LLMResponseEvent(original=chunk, text=total_text) and immediately uses break,
which prevents draining any subsequent usage-only chunk and so omits
input/output token metadata; instead, stop breaking on first finish_reason —
continue consuming the generator until it naturally ends, preserve the final
chunk (e.g., final_chunk variable or update the last seen chunk) and after the
loop call the existing _extract_usage_tokens(final_chunk) to populate usage
fields (input_tokens/output_tokens) on the LLMResponseEvent before returning;
update logic around LLMResponseEvent creation and the break in
chat_completions_vlm.py to mirror Gemini’s pattern so
stream_options={"include_usage": True} is honored.
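A minimal sketch of that drain-to-the-end pattern, using mock Chunk/Usage types (illustrative only, not the plugin's actual code): instead of breaking on finish_reason, keep consuming so a trailing usage-only chunk is preserved.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Usage:
    input_tokens: int
    output_tokens: int


@dataclass
class Chunk:
    text: Optional[str] = None
    finish_reason: Optional[str] = None
    usage: Optional[Usage] = None


def consume_stream_with_usage(chunks):
    """Drain the whole stream so a trailing usage-only chunk is not lost."""
    total_text = ""
    final_chunk = None
    for chunk in chunks:
        if chunk.text:
            total_text += chunk.text
        final_chunk = chunk  # keep consuming past finish_reason
    usage = final_chunk.usage if final_chunk else None
    return total_text, usage


stream = [
    Chunk(text="Hi"),
    Chunk(finish_reason="stop"),
    Chunk(usage=Usage(input_tokens=12, output_tokens=2)),  # usage-only chunk
]
text, usage = consume_stream_with_usage(stream)
# text == "Hi"; usage carries input_tokens=12 and output_tokens=2
```

With stream_options={"include_usage": True}, the OpenAI-compatible streaming API sends token counts in a final chunk after the finish_reason chunk, so breaking early (as the merged fix does) would drop them.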


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 45c20e41-6100-429e-8362-4eb97560f177

📥 Commits

Reviewing files that changed from the base of the PR and between b775230 and d3219bb.

📒 Files selected for processing (2)
  • plugins/openai/vision_agents/plugins/openai/chat_completions/chat_completions_vlm.py
  • plugins/openrouter/example/mimo_example.py

@maxkahan maxkahan merged commit ab21339 into main Mar 20, 2026
6 checks passed
@maxkahan maxkahan deleted the add-mimo-example branch March 20, 2026 12:43
