Installation & Quick Start
Get your first LLM trace flowing in under 5 minutes.
- Python 3.8+
- A running tmam instance — either [self-hosted](Self-Hosting-with-Docker) or the [cloud](https://cloud.tmam.ai)
- Your tmam public key and secret key (from Settings → API Keys in the dashboard)
```shell
pip install tmam
```
Call `init()` once at application startup, before any LLM calls:
```python
from tmam import init

init(
    url="http://localhost:5050/api/sdk",  # your tmam server endpoint
    public_key="pk-tmam-xxxxxxxx",
    secret_key="sk-tmam-xxxxxxxx",
    application_name="my-app",
    environment="production",
)
```
Cloud users: replace the `url` with `https://cloud.tmam.ai/api/sdk` and use your cloud API keys.
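Hard-coding keys is fine for a quick test, but in practice you will likely read them from the environment. A minimal sketch of that pattern — the `TMAM_*` variable names below are our own convention for illustration, not something the SDK defines:

```python
import os


def tmam_kwargs_from_env() -> dict:
    """Assemble init() keyword arguments from environment variables.

    The TMAM_* variable names are illustrative; use whatever naming
    convention fits your deployment. Missing required keys raise KeyError.
    """
    return {
        "url": os.environ.get("TMAM_URL", "http://localhost:5050/api/sdk"),
        "public_key": os.environ["TMAM_PUBLIC_KEY"],
        "secret_key": os.environ["TMAM_SECRET_KEY"],
        "application_name": os.environ.get("TMAM_APP_NAME", "default"),
        "environment": os.environ.get("TMAM_ENVIRONMENT", "default"),
    }
```

Then initialization becomes `init(**tmam_kwargs_from_env())`, and the same code works unchanged across local, staging, and cloud setups.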
No code changes required. tmam wraps supported libraries automatically:
```python
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What is observability?"}],
)
print(response.choices[0].message.content)
```
Every call is now automatically traced with token counts, cost, latency, model name, and (optionally) message content.
Open the dashboard at http://localhost:3001 and navigate to Observations → Requests to see your first traces.
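To get a feel for where the cost figure comes from: trackers like this typically derive it from the traced token counts and per-model pricing. A rough sketch of that arithmetic — the rates below are placeholders, not real prices, and tmam reads actual rates from its own pricing data (see `pricing_json` below):

```python
def estimate_cost(prompt_tokens: int, completion_tokens: int, pricing: dict) -> float:
    """Estimate a request's cost in USD from its token counts.

    `pricing` holds per-million-token USD rates, e.g.
    {"input": 2.50, "output": 10.00} -- placeholder numbers for illustration.
    """
    return (prompt_tokens * pricing["input"]
            + completion_tokens * pricing["output"]) / 1_000_000


# A 1,000-token prompt with a 500-token completion at the rates above:
# 1_000 * 2.50 + 500 * 10.00 = 7_500 micro-dollars, i.e. $0.0075
print(estimate_cost(1_000, 500, {"input": 2.50, "output": 10.00}))
```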
```python
from tmam import init

init(
    # Required
    url="http://localhost:5050/api/sdk",
    public_key="pk-tmam-xxxxxxxx",
    secret_key="sk-tmam-xxxxxxxx",

    # Identify your app
    application_name="my-app",            # default: "default"
    environment="production",             # default: "default"

    # Features
    capture_message_content=True,         # trace prompt/completion text (default: True)
    collect_gpu_stats=False,              # enable GPU metrics (default: False)
    disable_metrics=False,                # disable OTel metrics (default: False)
    disable_batch=False,                  # send spans immediately (default: False)

    # Selective instrumentation
    disabled_instrumentors=["openai"],    # skip specific providers

    # Guardrails
    guardrail_id="your-guardrail-id",     # default guardrail for Detect calls

    # Advanced
    pricing_json="/path/to/pricing.json", # custom model pricing
)
```
See the full [SDK Initialization](SDK-Initialization) page for detailed option descriptions.
Once initialized, tmam auto-detects installed libraries and instruments them. No extra imports needed.
✅ OpenAI detected → OpenAI calls instrumented
✅ anthropic detected → Anthropic calls instrumented
✅ langchain detected → LangChain chains instrumented
✅ chromadb detected → Chroma queries instrumented
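If you need to opt out of auto-instrumentation per deployment, `disabled_instrumentors` accepts a list of provider names. One way to drive it from configuration rather than code — the `TMAM_DISABLED_INSTRUMENTORS` variable name is a hypothetical convention of ours, not part of the SDK:

```python
import os


def disabled_from_env(var: str = "TMAM_DISABLED_INSTRUMENTORS") -> list[str]:
    """Parse a comma-separated env var into a disabled_instrumentors list.

    The variable name is our own convention for this sketch, not something
    tmam defines. An unset or empty variable yields an empty list.
    """
    raw = os.environ.get(var, "")
    return [name.strip() for name in raw.split(",") if name.strip()]
```

With `TMAM_DISABLED_INSTRUMENTORS="openai,chromadb"` set, calling `init(..., disabled_instrumentors=disabled_from_env())` skips those two providers while leaving the rest auto-detected.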
See [Supported Integrations](Supported-Integrations) for the full list.
- [Self-Hosting with Docker](Self-Hosting-with-Docker) — Run your own tmam server
- [Supported Integrations](Supported-Integrations) — See all 40+ supported libraries
- [Manual Tracing](Manual-Tracing) — Add custom spans to your own functions
- [GPU Monitoring](GPU-Monitoring) — Enable GPU metrics collection