
feat: add LiteLLM as AI gateway provider #379

Open
RheagalFire wants to merge 2 commits into openagents-org:develop from RheagalFire:feat/add-litellm-provider

Conversation

@RheagalFire

Summary

Adds LiteLLM as a new model provider, giving users access to 100+ LLM providers through a single LiteLLMProvider class. The implementation follows the existing BaseModelProvider pattern exactly.

Motivation

OpenAgents currently supports several providers (OpenAI, Anthropic, Bedrock, Gemini, MiniMax) with dedicated classes, plus a SimpleGenericProvider for OpenAI-compatible APIs. LiteLLM is a lightweight Python SDK that routes requests to the correct provider based on the model string (e.g. anthropic/claude-sonnet-4-5, vertex_ai/gemini-pro, cohere/command-r-plus). This gives users access to providers like Vertex AI, Cohere, AI21, and self-hosted endpoints without needing new provider classes for each one.

Changes

  • sdk/src/openagents/lms/providers.py - new LiteLLMProvider class extending BaseModelProvider
  • sdk/src/openagents/lms/__init__.py - registered in imports and __all__
  • sdk/src/openagents/config/llm_configs.py - added LITELLM enum value, MODEL_CONFIGS entry, and create_model_provider factory case (wiring sketched after this list)
  • pyproject.toml - added litellm>=1.55.0,<1.85 to sdk optional dependencies
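
For reference, the factory wiring follows the pattern below. This is a minimal sketch, assuming the enum is named ModelProviderType and that the factory dispatches on the provider string; the exact names are in the llm_configs.py diff.

from enum import Enum

from openagents.lms.providers import LiteLLMProvider

class ModelProviderType(str, Enum):
    # existing members (OPENAI, ANTHROPIC, BEDROCK, ...) elided
    LITELLM = "litellm"  # new enum value added by this PR

def create_model_provider(provider: str, model: str, **kwargs):
    # Only the new branch is shown; existing provider cases are unchanged.
    if provider == ModelProviderType.LITELLM.value:
        return LiteLLMProvider(model=model, **kwargs)
    raise ValueError(f"Unsupported provider: {provider}")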

Implementation

LiteLLMProvider follows the same pattern as OpenAIProvider (a condensed sketch follows this list):

  • chat_completion() calls litellm.acompletion() and returns the standardized {"content": ..., "tool_calls": [...], "usage": {...}} format
  • format_tools() uses OpenAI-compatible format (same as OpenAIProvider)
  • Tool calling support via tools and tool_choice parameters
  • Token usage extraction from the response
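
A condensed sketch of the provider core, assuming BaseModelProvider is importable from openagents.lms.providers (the full class in the diff handles more edge cases):

import litellm

from openagents.lms.providers import BaseModelProvider  # assumed import path

class LiteLLMProvider(BaseModelProvider):
    def __init__(self, model: str, **kwargs):
        self.model = model
        self.extra_kwargs = kwargs  # passed through to litellm on every call

    def format_tools(self, tools):
        # Tools are already in the OpenAI-compatible schema, so pass through.
        return tools

    async def chat_completion(self, messages, tools=None, tool_choice=None, **kwargs):
        params = {"model": self.model, "messages": messages, **self.extra_kwargs, **kwargs}
        if tools:
            params["tools"] = self.format_tools(tools)
            if tool_choice:
                params["tool_choice"] = tool_choice
        # litellm routes to the right backend based on the model string.
        response = await litellm.acompletion(**params)
        message = response.choices[0].message
        usage = response.usage
        return {
            "content": message.content,
            "tool_calls": [tc.model_dump() for tc in (message.tool_calls or [])],
            "usage": {
                "prompt_tokens": usage.prompt_tokens,
                "completion_tokens": usage.completion_tokens,
                "total_tokens": usage.total_tokens,
            },
        }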

Example usage

import asyncio

from openagents.config.llm_configs import create_model_provider

# Use any LiteLLM-supported provider
provider = create_model_provider("litellm", "anthropic/claude-sonnet-4-5")

result = asyncio.run(provider.chat_completion(
    messages=[{"role": "user", "content": "Hello!"}]
))
print(result["content"])

Provider-specific API keys are read from environment variables automatically (e.g. ANTHROPIC_API_KEY, OPENAI_API_KEY). See https://docs.litellm.ai/docs/providers for the full list.
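
Tool calling follows the same flow. Continuing from the example above (the get_weather tool is hypothetical, purely for illustration):

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

result = asyncio.run(provider.chat_completion(
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    tool_choice="auto",
))
print(result["tool_calls"])  # populated when the model chooses to call get_weather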

Tests

>>> provider = create_model_provider("litellm", "anthropic/claude-sonnet-4-5")
>>> result = asyncio.run(provider.chat_completion(
...     messages=[{"role": "user", "content": "What is 2+2? Answer with just the number."}]
... ))
>>> result["content"]
'4'
>>> result["usage"]
{'prompt_tokens': 20, 'completion_tokens': 5, 'total_tokens': 25}

Risk / Compatibility

  • Additive only. Existing providers untouched.
  • litellm>=1.55.0,<1.85 added to the sdk optional dependencies in pyproject.toml.
  • Response format matches the existing standardized format used by all other providers.

@vercel

vercel Bot commented May 12, 2026

@RheagalFire is attempting to deploy a commit to the Raphael's projects Team on Vercel.

A member of the Team first needs to authorize it.
