3 changes: 0 additions & 3 deletions .github/workflows/ci.yml
@@ -33,9 +33,6 @@ jobs:
      - name: Run linting
        run: uv run ruff check llmpane

      - name: Run type checking
        run: uv run mypy llmpane

      - name: Run tests
        run: uv run pytest -v

6 changes: 3 additions & 3 deletions .gitignore
@@ -182,9 +182,9 @@ cython_debug/
.abstra/

# Visual Studio Code
# Visual Studio Code specific template is maintained in a separate VisualStudioCode.gitignore
# that can be found at https://github.com/github/gitignore/blob/main/Global/VisualStudioCode.gitignore
# and can be added to the global gitignore or merged into this file. However, if you prefer,
# you could uncomment the following to ignore the entire vscode folder
# .vscode/

@@ -207,4 +207,4 @@ marimo/_lsp/
__marimo__/

# Claude
.claude
49 changes: 49 additions & 0 deletions README.md
@@ -18,6 +18,7 @@ the data—llmpane just handles the conversation UI.

- **Type-safe end-to-end** - Pydantic models in Python, TypeScript types in React
- **SSE streaming** - Real-time token streaming that works naturally with FastAPI
- **Multimodal support** - Send images via paste or file upload for vision-capable LLMs
- **Server-side message IDs** - Simplifies database storage and conversation history
- **Built-in patterns** - ActionMessage for confirmations, RefinementMessage for iterative outputs
- **Fully customizable** - CSS variables for theming, headless mode for complete control
@@ -73,6 +74,54 @@ function App() {

## Patterns

### Image Input (Multimodal)

Users can send images alongside text for vision-capable LLMs:

```tsx
// Image upload is on by default; the prop is shown explicitly for clarity
<ChatPane endpoint="/chat" allowImageUpload={true} />
```

**Frontend features:**
- Paste images from clipboard (Cmd/Ctrl+V)
- Click the image button to select files
- Preview attached images before sending
- Remove individual images with the X button
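
On the wire, an attached image travels as a base64-encoded part. A rough sketch of building that payload by hand, following the `ImagePart` fields shown under *Message content types* below (`type`, `data`, `media_type`):

```python
import base64

def encode_image_part(raw_bytes: bytes, media_type: str = "image/png") -> dict:
    """Build a dict in the ImagePart shape: base64 data plus media type."""
    return {
        "type": "image",
        "data": base64.b64encode(raw_bytes).decode("ascii"),
        "media_type": media_type,
    }
```

This mirrors what the React component does when a user pastes or uploads an image.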

**Backend handling with Pydantic AI:**

```python
from llmpane.agent import ChatSession
from pydantic_ai import Agent

# Use a vision-capable model
agent = Agent("google-gla:gemini-2.0-flash")
session = ChatSession(agent=agent)

@app.post("/chat")
async def chat(request: ChatRequest):
    # session.run() automatically converts ImagePart to BinaryContent
    return create_sse_response(
        session.run(request.conversation_id, request.message)
    )
```

**Message content types:**

```python
from llmpane.models import MessageContent, TextPart, ImagePart

# String for text-only (backward compatible)
message: MessageContent = "Hello, world!"

# List of parts for multimodal
message: MessageContent = [
    ImagePart(type="image", data="base64...", media_type="image/png"),
    TextPart(type="text", text="What's in this image?"),
]
```

### ActionMessage

For "LLM proposes, user confirms" flows:
173 changes: 173 additions & 0 deletions examples/agentic-chat-example/README.md
@@ -0,0 +1,173 @@
# Agentic Chat Example

This example demonstrates llmpane's full agentic chat capabilities:

- **Server-side persistence** with `ConversationStore`
- **Pydantic AI integration** via `ChatSession`
- **Tool calling** with automatic execution
- **Conversation history** retrieval for reconnection

## Features Demonstrated

| Feature | Description |
|---------|-------------|
| `ChatSession` | Combines Pydantic AI agent with conversation persistence |
| `InMemoryStore` | Default conversation store (auto-created) |
| Tool calling | Agent can call `get_order`, `search_products`, etc. |
| SSE streaming | Real-time response streaming with tool use metadata |
| Conversation API | List, retrieve, and delete conversations |

## Quick Start

### 1. Set up environment

```bash
cd examples/agentic-chat-example

# Sync dependencies (installs llmpane from workspace)
uv sync

# Install frontend dependencies
cd frontend && npm install && cd ..
```

### 2. Set API key

```bash
export ANTHROPIC_API_KEY="your-api-key-here"
```

### 3. Run the server

```bash
uv run python main.py
```

Server starts at http://localhost:8002

### 4. Run the frontend (separate terminal)

```bash
cd frontend
npm run dev
```

Frontend starts at http://localhost:5173

## Frontend

The frontend uses `@llmpane/react` components for a complete chat experience:

- **ChatApp** - All-in-one component with sidebar + chat panel
- **ToolIndicator** - Shows when the agent calls tools
- **Conversation sidebar** - List, create, switch, delete conversations

The frontend code is minimal (~50 lines) thanks to llmpane's batteries-included approach.

## API Endpoints

### Chat

```bash
# Send a message (creates new conversation)
curl -X POST http://localhost:8002/api/chat \
-H "Content-Type: application/json" \
-d '{"message": "What orders do you have?"}'

# Continue existing conversation
curl -X POST http://localhost:8002/api/chat \
-H "Content-Type: application/json" \
-d '{"message": "Tell me about ORD-001", "conversation_id": "conv_abc123"}'
```
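
Responses arrive as Server-Sent Events. A minimal sketch of pulling the JSON payload out of `data:` lines (the real chunk schema is llmpane's `StreamChunk`; the `delta` field in the sample lines below is only illustrative):

```python
import json

def parse_sse_data(lines):
    """Yield the JSON payload of each 'data:' line in an SSE stream."""
    for line in lines:
        line = line.strip()
        if line.startswith("data:"):
            payload = line[len("data:"):].strip()
            if payload and payload != "[DONE]":
                yield json.loads(payload)

# Illustrative stream -- real chunks follow the StreamChunk model
events = list(parse_sse_data(['data: {"delta": "Hi"}', "", 'data: {"delta": "!"}']))
```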

### Conversations

```bash
# List all conversations
curl http://localhost:8002/api/conversations

# Get specific conversation
curl http://localhost:8002/api/conversations/conv_abc123

# Delete conversation
curl -X DELETE http://localhost:8002/api/conversations/conv_abc123
```

## Available Tools

The agent has access to these tools:

| Tool | Description |
|------|-------------|
| `get_order(order_id)` | Look up order by ID |
| `search_products(query, category?)` | Search products |
| `get_customer(email)` | Look up customer info |
| `list_recent_orders(limit?)` | List recent orders |
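
Each tool is a plain function over the example's mock data. A sketch of `get_order`, using hypothetical records that match the example conversation below (the real example's data layout may differ):

```python
# Hypothetical in-memory order data; the example app ships its own records.
ORDERS = {
    "ORD-001": {
        "customer": "Alice Johnson",
        "status": "shipped",
        "items": ["Widget Pro", "Gadget Mini"],
        "total": 149.99,
    },
}

def get_order(order_id: str) -> dict:
    """Look up an order by ID, returning an error dict if not found."""
    order = ORDERS.get(order_id)
    if order is None:
        return {"error": f"No order found with id {order_id}"}
    return {"order_id": order_id, **order}
```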

## Example Conversation

```
User: What's the status of order ORD-001?

Agent: [calls get_order("ORD-001")]
Order ORD-001 for Alice Johnson is currently shipped.
It contains Widget Pro and Gadget Mini, with a total of $149.99.

User: What other products do you have in the widgets category?

Agent: [calls search_products("", "widgets")]
We have the Widget Pro available for $99.99. It's currently in stock.
```

## Architecture

```
┌─────────────────────────────────────────────────────────────┐
│ FastAPI Application │
├─────────────────────────────────────────────────────────────┤
│ POST /api/chat │
│ ├── ChatSession.run() │
│ │ ├── Get/create conversation from store │
│ │ ├── Persist user message │
│ │ ├── Call Pydantic AI agent.run_stream() │
│ │ ├── Convert events → StreamChunk │
│ │ └── Persist assistant message │
│ └── create_sse_response() → SSE stream │
├─────────────────────────────────────────────────────────────┤
│ GET /api/conversations │
│ └── InMemoryStore.list_conversations() │
├─────────────────────────────────────────────────────────────┤
│ GET /api/conversations/{id} │
│ └── InMemoryStore.get_conversation() │
└─────────────────────────────────────────────────────────────┘
```

## Customization

### Using a different store

```python
from llmpane.store import create_store, set_default_store

# Use Redis for production
store = create_store("redis", url="redis://localhost:6379")
set_default_store(store)
```
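
Conceptually, a store is a small CRUD interface over conversations. A toy dict-backed sketch, using the `get_conversation`/`list_conversations` names seen in the architecture diagram plus hypothetical `save_conversation`/`delete_conversation` methods:

```python
class DictStore:
    """Toy conversation store; method names partly mirror InMemoryStore, partly illustrative."""

    def __init__(self):
        self._conversations = {}

    def save_conversation(self, conversation_id: str, messages: list) -> None:
        self._conversations[conversation_id] = list(messages)

    def get_conversation(self, conversation_id: str):
        return self._conversations.get(conversation_id)

    def list_conversations(self) -> list:
        return list(self._conversations.keys())

    def delete_conversation(self, conversation_id: str) -> None:
        self._conversations.pop(conversation_id, None)
```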

### Adding more tools

```python
@agent.tool_plain
def my_custom_tool(param: str) -> dict:
    """Description shown to the LLM."""
    return {"result": do_something(param)}
```

### Using a different model

```python
agent = Agent(
    "openai:gpt-4o",  # or any Pydantic AI supported model
    system_prompt="...",
)
```
51 changes: 51 additions & 0 deletions examples/agentic-chat-example/app.py
@@ -0,0 +1,51 @@
"""Agentic chat example with persistence and tool calling.

This example demonstrates:
- Server-side conversation persistence with InMemoryStore
- Pydantic AI integration via ChatSession
- Tool calling with automatic execution
- Conversation history retrieval for reconnection
"""

from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from routes import chat_router, conversations_router


def create_app() -> FastAPI:
    """Create and configure the FastAPI application."""
    app = FastAPI(
        title="Agentic Chat Example",
        description="Chat with tool calling, persistence, and Pydantic AI integration",
        version="0.1.0",
    )

    # CORS for development
    app.add_middleware(
        CORSMiddleware,
        allow_origins=["*"],
        allow_credentials=True,
        allow_methods=["*"],
        allow_headers=["*"],
    )

    # Register routes
    app.include_router(chat_router)
    app.include_router(conversations_router)

    @app.get("/")
    async def root():
        return {
            "message": "Agentic Chat API",
            "docs": "/docs",
            "endpoints": {
                "chat": "POST /api/chat - Send a message and get streaming response",
                "conversations": "GET /api/conversations - List all conversations",
                "conversation": "GET /api/conversations/{id} - Get conversation history",
            },
        }

    return app


app = create_app()
24 changes: 24 additions & 0 deletions examples/agentic-chat-example/frontend/.gitignore
@@ -0,0 +1,24 @@
# Dependencies
node_modules/

# Build output
dist/

# Logs
*.log
npm-debug.log*

# Editor directories
.idea/
.vscode/
*.swp
*.swo

# OS files
.DS_Store
Thumbs.db

# Environment
.env
.env.local
.env.*.local
12 changes: 12 additions & 0 deletions examples/agentic-chat-example/frontend/index.html
@@ -0,0 +1,12 @@
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Agentic Chat - llmpane Demo</title>
  </head>
  <body>
    <div id="root"></div>
    <script type="module" src="/src/main.tsx"></script>
  </body>
</html>