---
title: Model A/B Testing
description: Test different LLM models to optimize performance, quality, and cost
---
Run A/B tests on models with zero latency. The same session always gets the same model (sticky assignment).
Create and manage your model configs in the [dashboard](https://bench.fallom.com).

```python
from fallom import models

# Get assigned model for this session
model = models.get("summarizer-config", session_id)
# Returns: "gpt-4o" or "claude-3-5-sonnet" based on your config weights
agent = Agent(model=model)
agent.run(message)
```
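Fallom's internal assignment logic isn't shown here, but deterministic sticky assignment can be sketched as hashing the session ID into a point on `[0, 1)` and picking a model by cumulative weight. The `WEIGHTS` dict and `assign_model` helper below are hypothetical illustrations, not part of the Fallom SDK:

```python
import hashlib

# Hypothetical config weights (in Fallom these live in the dashboard config).
WEIGHTS = {"gpt-4o": 0.5, "claude-3-5-sonnet": 0.5}

def assign_model(config_weights: dict, session_id: str) -> str:
    """Deterministically map a session to a weighted model choice."""
    digest = hashlib.sha256(session_id.encode()).digest()
    point = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    cumulative = 0.0
    for model, weight in config_weights.items():
        cumulative += weight
        if point < cumulative:
            return model
    return model  # guard against floating-point rounding at the top of the range
```

Because the hash depends only on the session ID, repeated calls for the same session return the same model, which keeps a user's experience consistent for the duration of the test.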
### Version Pinning
Pin to a specific config version, or use latest (default):
```python
# Use latest version (default)
model = models.get("my-config", session_id)
# Pin to specific version
model = models.get("my-config", session_id, version=2)
```
### Fallback for Resilience
Always provide a fallback so your app works even if Fallom is down:
```python
model = models.get(
    "my-config",
    session_id,
    fallback="gpt-4o-mini",  # Used if config not found or Fallom unreachable
)
```
```typescript
// Get assigned model for this session
const model = await models.get("summarizer-config", sessionId);
// Returns: "gpt-4o" or "claude-3-5-sonnet" based on your config weights
const response = await openai.chat.completions.create({ model, ... });
```
### Fallback for Resilience
```typescript
const model = await models.get("my-config", sessionId, {
  fallback: "gpt-4o-mini", // Used if config not found or Fallom unreachable
});
```