feat: add MiniMax provider support via direct API #100

octo-patch wants to merge 2 commits into huggingface:main from
Conversation
- Add `minimax/` prefix routing in `_resolve_llm_params`: resolves to the MiniMax OpenAI-compatible API at https://api.minimax.io/v1
- Add `MiniMax-M2.7` and `MiniMax-M2.7-highspeed` to `SUGGESTED_MODELS` alongside the existing HF-router entry (`MiniMaxAI/MiniMax-M2.7`)
- Add `minimax/` entries to `AVAILABLE_MODELS` in the backend routes
- Document `MINIMAX_API_KEY` env var in README
- Add 12 unit tests covering provider routing, env var handling, base URL override, and reasoning-effort passthrough behaviour
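The prefix routing described in the first bullet can be sketched roughly as below. The real code lives in `_resolve_llm_params` and is not shown in this PR excerpt, so the helper name and return shape here are a hypothetical reconstruction; only the `minimax/` prefix, the env vars, and the default URL come from the PR description.

```python
import os

# Hypothetical reconstruction of the minimax/ branch added to
# _resolve_llm_params; names other than the env vars and URL are invented.
MINIMAX_DEFAULT_BASE_URL = "https://api.minimax.io/v1"

def resolve_minimax_params(model: str) -> dict:
    """Map a minimax/<model> id onto LiteLLM's OpenAI-compatible adapter."""
    if not model.startswith("minimax/"):
        raise ValueError(f"not a minimax model: {model}")
    bare_model = model.removeprefix("minimax/")
    return {
        # The openai/ prefix tells LiteLLM to use its OpenAI adapter.
        "model": f"openai/{bare_model}",
        # MINIMAX_BASE_URL overrides the endpoint (e.g. a regional mirror).
        "api_base": os.environ.get("MINIMAX_BASE_URL", MINIMAX_DEFAULT_BASE_URL),
        "api_key": os.environ["MINIMAX_API_KEY"],
    }
```

Passing `minimax/MiniMax-M2.7` with no override would yield `{"model": "openai/MiniMax-M2.7", "api_base": "https://api.minimax.io/v1", ...}`.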
@claude please review this PR. Focus on correctness, security, LiteLLM/Bedrock routing impact, and whether it's safe to merge against current main. Keep it concise and prefer inline comments where it matters.
Claude finished @akseljoonas's task in 1m 52s

PR Review: feat: add MiniMax provider support via direct API
Summary

Implementation is correct and well-structured. No existing routing (LiteLLM, Bedrock, HF router) is affected. One functional bug, one label inconsistency, and a minor test fragility worth fixing before merge.

Bug — the new `minimax/` prefix is missing from `_print_hf_routing_info`.
| prefix | path | unchanged? |
|---|---|---|
| `bedrock/` | LiteLLM Converse adapter | ✅ |
| `anthropic/` | Native thinking config | ✅ |
| `openai/` | OpenAI adapter | ✅ |
| `minimax/` | New branch → OpenAI adapter + custom api_base | new |
| anything else | HF router fallthrough | ✅ |
The openai/{minimax_model} forwarding pattern is exactly right — LiteLLM uses the OpenAI adapter with our overridden api_base, which is the correct approach for any OpenAI-compatible endpoint.
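The routing table can be mirrored as a plain prefix dispatcher; the prefixes and outcomes below come from the review, while the function and the returned labels are illustrative stand-ins, not names from the codebase.

```python
# Illustrative dispatcher mirroring the routing table above; the string
# labels are shorthand for the adapters listed in the table.
def route_model(model: str) -> str:
    if model.startswith("bedrock/"):
        return "litellm-converse"
    if model.startswith("anthropic/"):
        return "anthropic-native-thinking"
    if model.startswith("openai/"):
        return "openai-adapter"
    if model.startswith("minimax/"):
        # New branch: OpenAI adapter with a custom api_base.
        return "openai-adapter+custom-api-base"
    return "hf-router"  # fallthrough for everything else
```

Note that the existing HF-router entry `MiniMaxAI/MiniMax-M2.7` still falls through unchanged, since it does not carry the `minimax/` prefix.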
Verdict
Safe to merge with one fix: the missing minimax/ in _print_hf_routing_info. The rest are quality improvements. Core routing, Bedrock, and HF-router paths are unaffected.
Summary
This PR adds direct MiniMax API support alongside the existing HF-router path.
What changes:
- `agent/core/llm_params.py`: adds `minimax/<model>` prefix routing that resolves to the MiniMax OpenAI-compatible endpoint (https://api.minimax.io/v1) using `MINIMAX_API_KEY`. Reasoning effort is silently ignored (MiniMax doesn't support thinking modes). The endpoint can be overridden via `MINIMAX_BASE_URL` (useful for the Chinese mirror api.minimaxi.com).
- `agent/core/model_switcher.py`: adds `minimax/MiniMax-M2.7` and `minimax/MiniMax-M2.7-highspeed` to `SUGGESTED_MODELS` and updates the `/model` help text to mention the new prefix.
- `backend/routes/agent.py`: adds the two direct-API model entries to `AVAILABLE_MODELS`.
- `README.md`: documents `MINIMAX_API_KEY` and the `--model minimax/MiniMax-M2.7` usage example.
- `tests/unit/test_minimax_provider.py`: 12 unit tests covering routing, env var handling, base URL override, and effort passthrough.

Why: The existing `MiniMaxAI/MiniMax-M2.7` entry routes through the HF router. A direct `minimax/` prefix lets users with a `MINIMAX_API_KEY` call api.minimax.io without HF intermediation — same pattern as the `anthropic/` and `openai/` prefixes.

API reference
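The reasoning-effort passthrough behaviour described for `agent/core/llm_params.py` can be sketched as follows; the function name and dict-based parameter shape are hypothetical, while the "silently ignored for minimax/" behaviour is from the PR description.

```python
# Hypothetical sketch of the reasoning-effort passthrough: effort is
# forwarded for providers with thinking modes, but silently dropped for
# minimax/ models (MiniMax does not support thinking modes).
def apply_reasoning_effort(params: dict, model: str, effort) -> dict:
    if effort is None or model.startswith("minimax/"):
        return dict(params)  # effort ignored, params passed through untouched
    return {**params, "reasoning_effort": effort}
```

A unit test along the lines of the PR's effort-passthrough tests would assert that the key is absent for `minimax/` models and present for others.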
Test plan
- Unit tests pass (`pytest tests/unit/test_minimax_provider.py -v`)
- `MiniMax-M2.7` responds correctly at `api.minimax.io/v1`