
feat: add MiniMax as first-class LLM provider#407

Open
octo-patch wants to merge 1 commit into ianarawjo:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

Add MiniMax as a first-class LLM provider in ChainForge, enabling users to query MiniMax M2.7 and M2.7-highspeed models directly from the visual prompt-testing UI.

Changes

  • Provider registration (models.ts): NativeLLM enum entries, LLMProvider.MiniMax, getProvider() detection, rate limit (1000 RPM)
  • API integration (utils.ts): call_minimax() function using OpenAI-compatible API at https://api.minimax.io/v1, temperature clamping (min 0.01) per MiniMax requirement, response extraction via existing OpenAI handler
  • Bugfix (utils.ts): call_chatgpt() now respects a custom API_KEY parameter; previously it threw a missing-OpenAI-key error even when a custom key was passed by OpenAI-compatible providers like DeepSeek/MiniMax
  • Settings schema (ModelSettingSchemas.tsx): Full model settings form with temperature (0.01-1.0, default 0.7), system message, top_p, max_tokens, stop sequences, presence/frequency penalty
  • UI menu (store.tsx): MiniMax group with M2.7 and M2.7-highspeed entries
  • Backend env mapping (flask_app.py): MINIMAX_API_KEY environment variable support
  • Tests (minimax.test.ts): 25 tests - 21 unit tests (enum values, provider detection, schema validation, response extraction, form data processing) + 3 integration tests (live API call, temperature clamping, system message) + 1 conditional test
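
The provider registration and detection described above can be sketched as follows. This is a minimal illustration; the actual enum values and getProvider() matching rules in models.ts may differ.

```typescript
// Illustrative sketch only: enum string values and the prefix-matching
// rules below are assumptions, not the exact models.ts code.
enum LLMProvider {
  OpenAI = "openai",
  DeepSeek = "deepseek",
  MiniMax = "minimax",
}

// Detect the provider from a model name prefix.
function getProviderFromModel(model: string): LLMProvider | undefined {
  const m = model.toLowerCase();
  if (m.startsWith("minimax")) return LLMProvider.MiniMax;
  if (m.startsWith("deepseek")) return LLMProvider.DeepSeek;
  if (m.startsWith("gpt")) return LLMProvider.OpenAI;
  return undefined;
}
```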

How it works

MiniMax follows the same pattern as DeepSeek: an OpenAI-compatible wrapper that delegates to call_chatgpt() with a custom base URL and API key. Users just need to set their MINIMAX_API_KEY in Settings.
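
The delegation pattern can be pictured roughly like this. Function signatures and option names are assumptions for illustration; the real call_chatgpt() in utils.ts takes different parameters.

```typescript
// Hypothetical signatures: the real call_chatgpt() in utils.ts differs.
// This only illustrates the clamp-then-delegate pattern.
type ChatParams = { temperature?: number; [key: string]: unknown };

const MINIMAX_BASE_URL = "https://api.minimax.io/v1";

// Stand-in for the real call_chatgpt(); here it just echoes its inputs
// so the delegation is visible without a network call.
async function callChatGPT(
  prompt: string,
  model: string,
  params: ChatParams,
  baseURL: string,
  apiKey: string,
): Promise<{ model: string; baseURL: string; temperature: number }> {
  return { model, baseURL, temperature: params.temperature ?? 1 };
}

// MiniMax wrapper: clamp temperature to the API's minimum of 0.01,
// then delegate to the OpenAI-compatible call with a custom base URL/key.
async function callMiniMax(
  prompt: string,
  model: string,
  params: ChatParams,
  apiKey: string,
) {
  const temperature = Math.max(params.temperature ?? 1, 0.01);
  return callChatGPT(
    prompt,
    model,
    { ...params, temperature },
    MINIMAX_BASE_URL,
    apiKey,
  );
}
```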

Test plan

  • All 21 unit tests pass
  • All 3 integration tests pass (with MINIMAX_API_KEY set)
  • No regressions in existing test suite (pre-existing failures remain unchanged)
  • Manual verification: add MiniMax model from UI dropdown, query with prompt, verify response
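
Since MiniMax replies in the OpenAI chat-completion shape, the response extraction exercised by the unit tests can be pictured as follows. Types and names here are illustrative; the real code reuses ChainForge's existing OpenAI handler.

```typescript
// Minimal OpenAI-compatible response shape; real responses carry more
// fields, omitted here for brevity.
interface ChatCompletionResponse {
  choices: { message: { role: string; content: string } }[];
}

// Pull the assistant text out of each choice, as the existing
// OpenAI handler does for MiniMax responses.
function extractResponses(resp: ChatCompletionResponse): string[] {
  return resp.choices.map((c) => c.message.content);
}
```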

Add MiniMax M2.7 and M2.7-highspeed models as a native LLM provider,
following the DeepSeek/OpenAI-compatible pattern. Includes:

- NativeLLM enum entries for MiniMax-M2.7 and MiniMax-M2.7-highspeed
- LLMProvider.MiniMax with provider detection and rate limiting
- call_minimax() via OpenAI-compat API (https://api.minimax.io/v1)
- Temperature clamping (min 0.01) per MiniMax API requirement
- Settings schema with model selector, temperature, system_msg, top_p,
  max_tokens, stop, presence/frequency_penalty
- UI menu group with both models
- Flask env var mapping for MINIMAX_API_KEY
- Fix: call_chatgpt now respects custom API_KEY param (skips
  OPENAI_API_KEY check when a custom key is provided)
- 25 tests (21 unit + 3 integration + 1 conditional)