Web UI + OpenAI-compatible API proxy for OpenRouter free models, with the model auto-selected from the Top Weekly ranking.
- Auto model selection — Picks the highest-ranked free model from OpenRouter's weekly leaderboard
- OpenAI-compatible API — Drop-in replacement for `/v1/chat/completions` and `/v1/models`
- Web UI — Browse free models, configure API key, test chat in browser
- Streaming support — SSE streaming for real-time responses
- 5-min cache — Model list cached to reduce API calls
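The 5-minute cache can be illustrated with a minimal TTL pattern (a sketch only, not the project's actual implementation; `fetch` is a hypothetical callable standing in for the real OpenRouter model-list request):

```python
import time

CACHE_TTL = 300.0  # five minutes, matching the cache window described above

_cache = {"models": None, "fetched_at": 0.0}

def get_models(fetch, now=None):
    """Return the cached model list, calling `fetch` only after the TTL lapses.

    `fetch` is a hypothetical callable; `now` is injectable for testing.
    """
    t = time.time() if now is None else now
    if _cache["models"] is None or t - _cache["fetched_at"] > CACHE_TTL:
        _cache["models"] = fetch()
        _cache["fetched_at"] = t
    return _cache["models"]
```

Within the TTL window every request is served from memory; only the first request after expiry hits the upstream API.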
Install:

```shell
pip install -e .
```

Set your OpenRouter API key:

```shell
export OPENROUTER_API_KEY="sk-or-..."
```

Start the server:

```shell
python -m catcher
```

List the available free models:

```shell
curl http://localhost:8000/v1/models
```

Chat with automatic model selection:

```shell
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -d '{"model":"auto","messages":[{"role":"user","content":"Hello!"}]}'
```

Chat with a specific model:

```shell
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -d '{"model":"qwen/qwen3.6-plus:free","messages":[{"role":"user","content":"Hello!"}]}'
```

Stream the response:

```shell
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -d '{"model":"auto","messages":[{"role":"user","content":"Hello!"}],"stream":true}'
```

| Endpoint | Description |
|---|---|
| `GET /` | Web UI |
| `GET /api/models` | Free models with full details (ranked) |
| `GET /v1/models` | OpenAI-compatible model list |
| `POST /v1/chat/completions` | OpenAI-compatible chat proxy |
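Streamed responses from the chat proxy use the standard SSE wire format (`data:` lines terminated by a `[DONE]` sentinel). A minimal client-side parsing sketch, using a hypothetical helper not part of this project:

```python
import json

def iter_sse_chunks(lines):
    """Yield the decoded JSON payload of each SSE `data:` event.

    Stops at the `[DONE]` sentinel that OpenAI-style streaming APIs
    send as the final event.
    """
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and SSE comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            return
        yield json.loads(payload)

# Example with the wire format an OpenAI-compatible stream emits:
sample = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo!"}}]}',
    "data: [DONE]",
]
text = "".join(
    c["choices"][0]["delta"].get("content", "")
    for c in iter_sse_chunks(sample)
)
print(text)  # -> Hello!
```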
| Variable | Description | Default |
|---|---|---|
| `OPENROUTER_API_KEY` | OpenRouter API key (or pass via `Authorization` header) | — |
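Since the key can come from either the environment or the request's `Authorization` header, the lookup can be sketched as follows (a hypothetical helper; the assumption that a per-request header takes precedence over the environment variable is mine, not stated by the project):

```python
import os

def resolve_api_key(authorization_header=None):
    """Return the OpenRouter key to use for an upstream request.

    Assumed precedence: a `Bearer` token in the request's Authorization
    header wins; otherwise fall back to OPENROUTER_API_KEY.
    """
    if authorization_header and authorization_header.lower().startswith("bearer "):
        return authorization_header[len("Bearer "):]
    return os.environ.get("OPENROUTER_API_KEY")
```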
- FastAPI — async web framework
- httpx — async HTTP client
- uvicorn — ASGI server
