feat: add MiniMax as LLM provider for classification/summarization #137

Open

octo-patch wants to merge 1 commit into verygoodplugins:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

Adds MiniMax as a first-class LLM provider for memory type classification and auto-summarization, alongside the existing OpenAI support.

What changed

  • LLM_PROVIDER env var (auto/openai/minimax) for explicit provider selection (see the sketch after this list)
  • Auto-detection: when OPENAI_API_KEY is not set, falls back to MINIMAX_API_KEY automatically
  • Temperature clamping to (0, 1] for MiniMax models (API requirement)
  • <think> tag stripping for MiniMax M2.5+ reasoning traces in classification/summarization responses
  • MiniMax-M2.7 (204K context) as the default model when using MiniMax
  • Documentation updates: .env.example, README.md, INSTALLATION.md, CLAUDE.md, docs/ENVIRONMENT_VARIABLES.md
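
A minimal sketch of the selection logic above (the helper name resolve_llm_provider is illustrative; the actual logic lives in init_openai() in automem/service_runtime.py):

import os

def resolve_llm_provider() -> str:
    """Pick a provider: explicit LLM_PROVIDER wins; otherwise auto-detect by key."""
    provider = os.getenv("LLM_PROVIDER", "auto").lower()
    if provider in ("openai", "minimax"):
        return provider
    # auto mode: prefer OpenAI, fall back to MiniMax when only its key is set
    if os.getenv("OPENAI_API_KEY"):
        return "openai"
    if os.getenv("MINIMAX_API_KEY"):
        return "minimax"
    raise RuntimeError("No OPENAI_API_KEY or MINIMAX_API_KEY configured")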

Why MiniMax?

MiniMax M2.7 offers a 204K context window at competitive pricing (~$0.14/1M input tokens), making it a cost-effective alternative for memory classification and summarization tasks. Its OpenAI-compatible API means zero new dependencies: it reuses the existing openai Python SDK with a different base_url.
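
Since the API is OpenAI-compatible, switching providers amounts to constructing the same client against a different endpoint. A minimal sketch (the default URL shown for MINIMAX_BASE_URL is an assumption; the PR reads the actual value from config):

import os
from openai import OpenAI

# Same SDK as the OpenAI path; only the key and endpoint differ.
client = OpenAI(
    api_key=os.environ["MINIMAX_API_KEY"],
    base_url=os.getenv("MINIMAX_BASE_URL", "https://api.minimaxi.com/v1"),  # assumed default
)

response = client.chat.completions.create(
    model="MiniMax-M2.7",
    messages=[{"role": "user", "content": "Classify this memory: ..."}],
    temperature=0.2,  # MiniMax requires temperature in (0, 1]
)
print(response.choices[0].message.content)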

Quick start

# Option 1: Explicit provider
LLM_PROVIDER=minimax
MINIMAX_API_KEY=your-key
CLASSIFICATION_MODEL=MiniMax-M2.7

# Option 2: Auto-detection (just set the key)
MINIMAX_API_KEY=your-key

Files changed

  • automem/config.py: Add LLM_PROVIDER, MINIMAX_API_KEY, MINIMAX_BASE_URL, MINIMAX_DEFAULT_MODEL
  • automem/service_runtime.py: Extend init_openai() with MiniMax auto-detection and explicit selection
  • automem/classification/memory_classifier.py: Temperature clamping + think-tag stripping for MiniMax models (sketched below)
  • automem/utils/text.py: Temperature clamping + think-tag stripping for MiniMax in summarization
  • .env.example: MiniMax config section
  • README.md: Add MiniMax to the Configuration section
  • INSTALLATION.md: Add MiniMax env vars to the config table
  • CLAUDE.md: Add LLM_PROVIDER and MINIMAX_API_KEY docs
  • docs/ENVIRONMENT_VARIABLES.md: Add MiniMax model pricing + LLM provider docs
  • tests/test_minimax_provider.py: 26 unit tests covering init, temperature clamping, think-tag stripping, summarization
  • tests/test_minimax_integration.py: 4 integration tests (classification + summarization against the real API)

11 files changed, 840 additions, 16 deletions
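
The MiniMax-specific behavior in memory_classifier.py and utils/text.py boils down to two small helpers. A hedged sketch of what they might look like (function names are illustrative, not the actual implementation):

import re

_THINK_RE = re.compile(r"<think>.*?</think>", re.DOTALL)

def clamp_temperature(temp: float) -> float:
    """MiniMax accepts temperature only in (0, 1]; pull out-of-range values back in."""
    return min(max(temp, 0.01), 1.0)

def strip_think_tags(text: str) -> str:
    """Remove <think>...</think> reasoning traces emitted by MiniMax M2.5+ models."""
    return _THINK_RE.sub("", text).strip()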

Test plan

  • 26 unit tests pass (pytest tests/test_minimax_provider.py); a sketch of the test style follows this list
  • 4 integration tests pass (MINIMAX_API_KEY=... pytest tests/test_minimax_integration.py)
  • Existing test_service_runtime.py tests still pass (backward compatible)
  • Verify LLM_PROVIDER=minimax with real MiniMax API key
  • Verify LLM_PROVIDER=auto fallback from OpenAI to MiniMax
  • Verify existing OpenAI workflow is unaffected
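
For illustration, a unit test in that style might look like the following (hypothetical; the real assertions live in tests/test_minimax_provider.py):

import re

def strip_think_tags(text: str) -> str:
    # Mirrors the helper sketched under "Files changed"
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

def test_strip_think_tags_removes_reasoning_trace():
    raw = "<think>weighing memory types...</think>Decision"
    assert strip_think_tags(raw) == "Decision"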

Commit message

feat: add MiniMax as LLM provider for classification/summarization

Add MiniMax (https://platform.minimaxi.com) as an alternative LLM provider
for memory type classification and auto-summarization, alongside OpenAI.

Changes:
- LLM_PROVIDER env var (auto/openai/minimax) for explicit provider selection
- Auto-detection: MINIMAX_API_KEY fallback when OPENAI_API_KEY is not set
- Temperature clamping to (0, 1] for MiniMax models
- <think> tag stripping for MiniMax M2.5+ reasoning traces
- MiniMax-M2.7 (204K context) as default model

Files: 11 changed, 840 additions
Tests: 26 unit + 4 integration tests