---
title: Quickstart
description: Start tracing your LLM calls in under 5 minutes
---

## Installation (Python)

```bash
pip install fallom

# With auto-instrumentation for your LLM provider:
pip install fallom opentelemetry-instrumentation-openai
pip install fallom opentelemetry-instrumentation-anthropic
```

## Quick Start (Python)

<Warning>
  **Import order matters!** You must import and initialize Fallom **before** importing OpenAI or other LLM libraries.
</Warning>

```python
import fallom
fallom.init(api_key="your-api-key")

# NOW import OpenAI (after instrumentation is set up)
from openai import OpenAI
client = OpenAI()

# Set default session context for tracing.
# session_id is any string you choose to group related calls.
session_id = "session-123"
fallom.trace.set_session("my-agent", session_id)

# All LLM calls are now automatically traced!
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)
```

## Installation (TypeScript)

```bash
npm install @fallom/trace
```

## Quick Start (TypeScript)

```typescript
import fallom from "@fallom/trace";
import OpenAI from "openai";

// Initialize Fallom
await fallom.init({ apiKey: "your-api-key" });

// Wrap your LLM client for automatic tracing
const openai = fallom.trace.wrapOpenAI(new OpenAI());

// Set session context.
// sessionId is any string you choose to group related calls.
const sessionId = "session-123";
fallom.trace.setSession("my-agent", sessionId);

// All LLM calls are now automatically traced!
const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello!" }],
});
```

## What Gets Traced?

Every LLM call is automatically captured with:

| Field | Description |
| --- | --- |
| Model | The LLM model used (e.g., `gpt-4o`, `claude-3-opus`) |
| Tokens | Input and output token counts |
| Latency | Request duration in milliseconds |
| Prompts | Full prompt messages sent to the model |
| Completions | Model responses |
| Session | Your config key and session ID for grouping |
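As a rough mental model, each traced call can be thought of as a record carrying these fields. The sketch below is illustrative only; the `TraceRecord` name, field names, and types are our assumptions, not Fallom's actual schema:

```python
from dataclasses import dataclass

@dataclass
class TraceRecord:
    """Illustrative shape of one traced LLM call (hypothetical, not Fallom's real schema)."""
    model: str          # e.g. "gpt-4o" or "claude-3-opus"
    input_tokens: int   # tokens in the prompt
    output_tokens: int  # tokens in the completion
    latency_ms: float   # request duration in milliseconds
    prompts: list       # full messages sent to the model
    completions: list   # model responses
    config_key: str     # groups traces by agent/config, e.g. "my-agent"
    session_id: str     # groups traces within one user session

record = TraceRecord(
    model="gpt-4o",
    input_tokens=9,
    output_tokens=12,
    latency_ms=842.0,
    prompts=[{"role": "user", "content": "Hello!"}],
    completions=[{"role": "assistant", "content": "Hi there!"}],
    config_key="my-agent",
    session_id="session-123",
)
```

The config key and session ID are what let the dashboard group many such records into a single conversation or agent run.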

## Get Your API Key

1. Create a free account at [bench.fallom.com](https://bench.fallom.com)
2. Set up your first project in the [dashboard](https://bench.fallom.com)
3. Find your API key in the project settings, then use it to initialize the SDK and view your traces
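Rather than hardcoding the key in source, you can read it from an environment variable at startup. A minimal sketch; the `FALLOM_API_KEY` variable name is a convention chosen here, not something the SDK requires:

```python
import os

def load_api_key(env_var: str = "FALLOM_API_KEY") -> str:
    """Fetch the API key from the environment, failing fast if it is missing."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set {env_var} before starting the app")
    return key

# At startup, pass the loaded key to the SDK:
# fallom.init(api_key=load_api_key())
```

Failing fast here is deliberate: a missing key surfaces at boot rather than as silently untraced calls later.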

## Next Steps

Test different models to optimize cost and performance. Experiment with prompts to improve outputs.
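For instance, once token counts are traced, comparing the cost of two models on the same workload is simple arithmetic. A sketch with example per-million-token prices (these are assumptions for illustration; check your provider's current pricing):

```python
# Example per-million-token prices in USD (illustrative, not authoritative)
PRICES = {
    "gpt-4o": {"input": 2.50, "output": 10.00},
    "gpt-4o-mini": {"input": 0.15, "output": 0.60},
}

def call_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of one call from traced token counts."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Same traced workload (1200 input tokens, 400 output tokens), two models:
cost_large = call_cost("gpt-4o", 1200, 400)
cost_small = call_cost("gpt-4o-mini", 1200, 400)
```

Summing `call_cost` over the traces for each session gives a per-session cost you can weigh against output quality.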