Merged
40 changes: 37 additions & 3 deletions README.md
@@ -28,10 +28,44 @@ export TELLERS_API_BASE=https://api.tellers.ai

## Usage

### Chat Commands
### Chat / Prompt

- `tellers "prompt"` — displays a minimal chat TUI from a streamed response
- `tellers --full-auto --background "prompt"` — starts a chat and prints only the chat id
Run a prompt against the Tellers agent:

- **`tellers "prompt"`** — Streams the response and starts a minimal REPL (reply in the terminal; Ctrl-C to exit).
- **`tellers --background "prompt"`** — Single request, no REPL; prints the response text (or last JSON result when using `--json-response`).
- **`tellers --full-auto --background "prompt"`** — Same as `--background` with full-auto behavior.

**Prompt options:**

| Flag | Description |
|------|-------------|
| `--no-interaction` | Single response only, no REPL. |
| `--json-response` | Use the JSON endpoint; output is the last `tellers.json_result` event (no interaction implied). |
| `--tool <TOOL_ID>` | Enable a tool (repeat for multiple). Omit to use default tools from settings. |
| `--llm-model <MODEL>` | LLM model (e.g. `gpt-5.4-2026-03-05`). |
| `--interactive`, `-i` | Interactively set options: JSON response (y/N), no interaction (y/N), tool selection (checkbox list), and LLM model (list). |

**Examples:**

```bash
# Streamed chat with REPL
tellers "Generate a video, with cats"

# Single response, no follow-up
tellers --no-interaction "Generate a video, with stock footage video of cats"

# JSON endpoint: only the last JSON result is printed
tellers --json-response "Generate a video, with stock footage video of cats"

# Choose model and tools via flags
tellers --llm-model gpt-5.4-2026-03-05 --tool tool_a --tool tool_b "Your prompt"

# Interactive: prompts for JSON, no-interaction, checkbox list for tools, model picker
tellers -i "Generate a video, with stock footage video of cats"
```

**Interactive tool selection:** When you use `-i` and the settings include available tools, a TUI checkbox list is shown. Use **↑/↓** to move, **Space** to toggle, **a** to toggle all, **Enter** to confirm. Checkboxes are pre-set from each tool’s `enabled` field in the settings JSON (missing = enabled).
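The "missing = enabled" default above can be sketched in Rust. This is a minimal illustration, not the project's actual code; `is_pre_checked` is a hypothetical helper modeling a tool entry whose optional `enabled` field was parsed from the settings JSON:

```rust
// Hypothetical helper mirroring the documented behavior: a tool whose
// settings entry has no `enabled` field is treated as enabled, so its
// checkbox starts checked in the TUI list.
fn is_pre_checked(enabled: Option<bool>) -> bool {
    enabled.unwrap_or(true)
}

fn main() {
    assert!(is_pre_checked(None));         // field missing => pre-checked
    assert!(is_pre_checked(Some(true)));   // explicitly enabled
    assert!(!is_pre_checked(Some(false))); // explicitly disabled
}
```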

### Upload Command

20 changes: 20 additions & 0 deletions src/cli.rs
@@ -12,6 +12,26 @@ pub struct Cli {
#[arg(long)]
pub background: bool,

/// Disable interaction with the agent (single response, no REPL).
#[arg(long)]
pub no_interaction: bool,

/// Use JSON response endpoint (no_interaction implied; SSE tellers.json_result events).
#[arg(long)]
pub json_response: bool,

/// Tool(s) to enable (can be repeated). Omit for default tools.
#[arg(long = "tool", value_name = "TOOL_ID")]
pub tools: Vec<String>,

/// LLM model to use (e.g. gpt-5.4-2026-03-05).
#[arg(long, value_name = "MODEL")]
pub llm_model: Option<String>,

/// Interactively set json_response, no_interaction, tools, and llm_model.
#[arg(short, long)]
pub interactive: bool,

#[arg()]
pub prompt: Option<String>,
