Applies WebGL Chroma Key Green Screen Transparency
+ INSTRUCTIONS:
+ 1. Create a conversation with the `apply_greenscreen` property set to `true`.
+ 2. Enter the room URL and click Join. The replica's video/audio feed should display with a transparent background.
+ 3. Experiment with changing the background color of the page to see what the replica looks like against different colors.
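The keying step can be illustrated in plain JavaScript. This is a hedged sketch of what the WebGL fragment shader does for each pixel; the distance metric and the 0.4 threshold are illustrative assumptions, not the demo's exact values.

```javascript
// Per-pixel chroma key: pixels close to pure green become fully transparent.
// The real demo runs this logic on the GPU in a WebGL fragment shader.
function chromaKeyPixel([r, g, b, a], threshold = 0.4) {
  // Euclidean distance from pure green (0, 1, 0) in normalized RGB space.
  const dist = Math.sqrt(r * r + (1 - g) * (1 - g) + b * b);
  return dist < threshold ? [r, g, b, 0] : [r, g, b, a];
}
```

A WebGL implementation would discard (or zero the alpha of) fragments whose distance falls below the threshold, which is what makes the page background show through.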
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/examples/custom_ending_message/README.md b/examples/custom_ending_message/README.md
new file mode 100644
index 0000000..d1f6ad1
--- /dev/null
+++ b/examples/custom_ending_message/README.md
@@ -0,0 +1,89 @@
+# Tavus Custom Meeting End Message Demo
+
+This demo shows how to implement custom end messages for Tavus video meetings using Daily.co's JavaScript SDK.
+
+[LIVE DEMO](https://andy-tavus.github.io/custom_ending_message/)
+
+## Overview
+
+When a video meeting ends, Daily.co shows default messages like these:
+
+
+
+
+This demo replaces these default messages with custom ones for different end scenarios:
+
+1. User voluntarily leaving the meeting
+2. Meeting ending due to an error
+3. Host ending the meeting
+
+## Implementation Details
+
+The implementation uses Daily.co's JavaScript SDK to:
+- Create and manage the video meeting iframe
+- Listen for meeting end events
+- Replace the default end messages with custom ones based on the end scenario
+
+### How It Works
+
+The key to replacing Daily.co's default end messages is timing and DOM manipulation:
+
+1. We set up event listeners that fire before Daily.co shows their default messages
+2. When an end event occurs, we immediately:
+ - Clear the Daily.co iframe from the DOM (`meetingArea.innerHTML = ''`)
+ - Show our own message div instead
+ - This prevents Daily.co's default UI from appearing
+
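The pattern above can be sketched as a small helper. This is a minimal sketch, assuming a meeting container and a hidden message div; the elements are passed in as parameters for clarity, while the demo itself looks them up by ID in `index.html`.

```javascript
// Clear the Daily.co iframe and reveal the custom message instead.
// Element references are parameters here so the pattern is easy to test;
// the demo's actual element IDs may differ.
function showEndMessage(meetingArea, endMessage, text) {
  meetingArea.innerHTML = '';          // remove the Daily.co iframe
  endMessage.textContent = text;       // set the custom message text
  endMessage.style.display = 'block';  // un-hide the message div
}
```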
+### Key Components
+
+1. **HTML Structure**
+   ```html
+   <!-- Illustrative sketch; the real index.html defines these elements.
+        The IDs shown here are placeholders, not the demo's exact IDs. -->
+   <div id="meeting-area"><!-- Daily.co iframe is injected here --></div>
+   <div id="end-message" style="display: none;"></div>
+   ```
+
+2. **Event Listeners**
+ - `left-meeting`: Triggered when user leaves the meeting
+ - `error`: Triggered when an error ends the meeting
+ - `nonfatal-error`: Triggered when host ends the meeting (check for `action === 'end-meeting'`)
+
+3. **Message Display**
+ - Uses a dedicated div that's hidden by default
+ - Shows different messages based on the end scenario
+ - Clears the meeting iframe when showing the end message
+
+## How to Use
+
+1. Clone this repository
+2. Open `index.html` in a web browser
+3. Enter a valid Daily.co room URL
+4. Click "Join" to start the meeting
+5. Test different end scenarios to see custom messages
+
+## Customization
+
+To customize the end messages, modify the text in the event listeners in `index.html`:
+
+```javascript
+callFrame.on('left-meeting', () => {
+ showEndMessage('You left the meeting.');
+});
+
+callFrame.on('error', (e) => {
+ showEndMessage('The meeting ended due to an error.');
+});
+
+callFrame.on('nonfatal-error', (e) => {
+ if (e?.action === 'end-meeting') {
+ showEndMessage('The meeting was ended by the host.');
+ }
+});
+```
+
+## Resources
+
+- [Tavus Documentation](https://docs.tavus.io)
+- [Daily.co JavaScript SDK Documentation](https://docs.daily.co/reference/daily-js)
+- [Source Code](https://github.com/andy-tavus/custom_ending_message)
diff --git a/examples/custom_ending_message/index.html b/examples/custom_ending_message/index.html
new file mode 100644
index 0000000..9dc3f57
--- /dev/null
+++ b/examples/custom_ending_message/index.html
@@ -0,0 +1,161 @@
+
+
+
+ Tavus - Custom Meeting End Message Demo
+
+
+
+
+
By default, Daily.co shows these messages when a meeting ends:
+
+
+
+
+
+
+
This demo shows how to replace these with custom messages by:
+
+
Creating a hidden message div in your HTML
+
Using Daily.co event listeners to detect different end scenarios:
+
+
left-meeting: User clicks "leave"
+
error: Connection/technical issues
+
nonfatal-error with end-meeting action: Host ends call
+
+
+
Showing your custom message when these events occur
+
+
+
To see it in action:
+
+
Generate a Daily.co room URL
+
Paste it below and join
+
Try different end scenarios to see custom messages
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/examples/interactions-protocol-playground/README.md b/examples/interactions-protocol-playground/README.md
new file mode 100644
index 0000000..fef57ee
--- /dev/null
+++ b/examples/interactions-protocol-playground/README.md
@@ -0,0 +1,54 @@
+# Interactions Protocol Playground
+### [LIVE DEMO](https://andy-tavus.github.io/interactions-protocol-playground/)
+
+
+The **Interactions Protocol Playground** web app allows users to interact with a Daily video room integrated with Tavus's backend for testing replica interactions. Specifically, it utilizes Tavus's [Interactions Protocol](https://docs.tavus.io/api-reference/interactions-protocol). Here's a breakdown of its components and interactions:
+
+## 1. Room Management & Video Display
+ - **Conversation ID Input**: Users enter a Tavus `conversation_id` in the input box and press the "Join" button to connect to that specific conversation room.
+  - **Join/Leave Controls**: Dedicated Join and Leave buttons with proper state management; Join is disabled while connected, and Leave is disabled when not connected.
+ - **Video Display**: Central video container shows the replica participant's video feed with automatic audio playback when available.
+ - **Dynamic Updates**: When a new conversation ID is entered, it updates all interaction text areas with the current ID.
+ - **Error Handling**: Prevents duplicate Daily.js instances and provides user-friendly error messages.
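The duplicate-instance guard can be sketched as a memoized factory. The function and variable names here are illustrative, not from the playground's source; the motivation is that daily-js rejects multiple simultaneous call instances.

```javascript
// Reuse a single call object instead of creating duplicates.
let callObject = null;

function getOrCreateCallObject(factory) {
  if (!callObject) {
    // e.g. factory = () => window.DailyIframe.createCallObject()
    callObject = factory();
  }
  return callObject;
}
```

On Leave, the app would destroy the call object and reset the reference so a fresh join can succeed.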
+
+## 2. Interaction Controls & Event Types
+ The app provides four main interaction types, each with editable JSON payloads:
+
+ - **Echo** (`conversation.echo`): Makes the replica speak specific text directly without processing through the LLM.
+ - **Respond** (`conversation.respond`): Simulates user input that the replica will process and respond to naturally.
+ - **Interrupt** (`conversation.interrupt`): Stops the replica from speaking immediately.
+ - **Context Management**: Toggle between two context operations:
+ - **Overwrite** (`conversation.overwrite_llm_context`): Replaces the entire conversational context
+ - **Append** (`conversation.append_llm_context`): Adds information to the existing context
+
+ - **Message Execution**: Each interaction uses the `executeCode` function to parse JSON from text areas and send via `callObject.sendAppMessage()`.
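The send path can be sketched as follows. The payload shape follows the Interactions Protocol event types named above; the helper name and the conversation ID are illustrative assumptions.

```javascript
// Build an Interactions Protocol payload like the ones held in the
// playground's editable text areas.
function buildInteraction(eventType, conversationId, properties) {
  return {
    message_type: 'conversation',
    event_type: eventType,
    conversation_id: conversationId,
    properties
  };
}

// Example: make the replica speak a line verbatim (echo).
// 'c123abc' is a placeholder conversation ID.
const echo = buildInteraction('conversation.echo', 'c123abc', { text: 'Hello!' });
// Then send it over Daily's app-message channel to all participants:
// callObject.sendAppMessage(echo, '*');
```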
+
+## 3. Comprehensive Event Logging
+ - **Real-time Event Log**: All events (sent and received) are captured in a detailed table format showing:
+ - **Timestamp**: Precise time in HH:MM:SS format
+ - **Event Type**: Abbreviated event names with color coding
+ - **Direction**: "F" (From Tavus) or "T" (To Tavus) indicating message direction
+ - **Role**: Speaker role (user, replica, etc.)
+ - **Text**: Extracted speech, text content, or context data
+ - **Inference ID**: Truncated inference identifier for tracking
+
+ - **Event Types Tracked**:
+ - Speaking events: `user.started_speaking`, `user.stopped_speaking`, `replica.started_speaking`, `replica.stopped_speaking`
+ - Content events: `conversation.utterance` (transcribed speech)
+ - Control events: All interaction types sent to Tavus
+
+ - **Visual Organization**:
+ - Color-coded events (cool colors for received, warm colors for sent)
+ - Interactive legend explaining all event types and colors
+ - Automatic scrolling to show latest events
+ - Limited to 250 entries for performance
+
+ - **Export Functionality**: CSV export of all logged events with full timestamps and details.
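The CSV export can be sketched like this. The column set mirrors the log table described above, but the field names on the event objects are assumptions about the playground's internal log format.

```javascript
// Turn logged event objects into CSV, quoting the free-text column so
// embedded commas and quotes survive.
function eventsToCsv(events) {
  const header = 'timestamp,event_type,direction,role,text';
  const rows = events.map(e =>
    [e.timestamp, e.type, e.direction, e.role,
     `"${String(e.text ?? '').replace(/"/g, '""')}"`].join(',')
  );
  return [header, ...rows].join('\n');
}
```

In the browser, the resulting string would typically be wrapped in a `Blob` and offered as a download link.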
+
+## 4. Technical Implementation
+ - **Daily.js Integration**: Uses Daily.js SDK for video calling with proper cleanup and error handling
+ - **Participant Management**: Automatically detects and displays existing participants in the room
+ - **State Management**: Robust button state management and call object lifecycle handling
+ - **Responsive Design**: Fixed video positioning with responsive layout
+
+This setup provides a comprehensive testing environment for Tavus replica interactions, with detailed logging and real-time feedback for all conversation events.
diff --git a/examples/interactions-protocol-playground/index.html b/examples/interactions-protocol-playground/index.html
new file mode 100644
index 0000000..da8e5cc
--- /dev/null
+++ b/examples/interactions-protocol-playground/index.html
@@ -0,0 +1,179 @@
+
+
+
+
+
+ Tavus Interactions Protocol Playground
+
+
+
+
+
+
+
+
+
+
diff --git a/examples/microphone-only-cvi/.gitignore b/examples/microphone-only-cvi/.gitignore
new file mode 100644
index 0000000..cedc1cc
--- /dev/null
+++ b/examples/microphone-only-cvi/.gitignore
@@ -0,0 +1,58 @@
+# OS generated files
+.DS_Store
+.DS_Store?
+._*
+.Spotlight-V100
+.Trashes
+ehthumbs.db
+Thumbs.db
+
+# Editor directories and files
+.vscode/
+.idea/
+*.swp
+*.swo
+*~
+
+# Logs
+logs
+*.log
+npm-debug.log*
+yarn-debug.log*
+yarn-error.log*
+
+# Runtime data
+pids
+*.pid
+*.seed
+*.pid.lock
+
+# Dependency directories
+node_modules/
+
+# Optional npm cache directory
+.npm
+
+# Optional REPL history
+.node_repl_history
+
+# Output of 'npm pack'
+*.tgz
+
+# Yarn Integrity file
+.yarn-integrity
+
+# Environment variables
+.env
+.env.local
+.env.development.local
+.env.test.local
+.env.production.local
+
+# Build outputs
+dist/
+build/
+
+# Temporary files
+*.tmp
+*.temp
diff --git a/examples/microphone-only-cvi/README.md b/examples/microphone-only-cvi/README.md
new file mode 100644
index 0000000..5fc4f07
--- /dev/null
+++ b/examples/microphone-only-cvi/README.md
@@ -0,0 +1,69 @@
+# Tavus CVI Microphone-Only Example
+
+A voice-only interface for conversing with Tavus Replicas using microphone input. This example demonstrates how to selectively join Daily.co streams depending on app requirements.
+
+[LIVE DEMO](https://andy-tavus.github.io/microphone-only-cvi/)
+
+## Getting Started
+
+### Prerequisites
+
+- A Tavus account with API access
+- A conversation ID from the [Tavus Platform](https://platform.tavus.io) or [Create Conversation API](https://docs.tavus.io/api-reference/conversations/create-conversation)
+
+### Setup
+
+1. Clone this repository:
+ ```bash
+ git clone https://github.com/andy-tavus/microphone-only-cvi.git
+ cd microphone-only-cvi
+ ```
+
+2. Open `index.html` in a web browser or serve it using a local web server:
+ ```bash
+ # Using Python 3
+ python -m http.server 8000
+
+ # Using Node.js (if you have http-server installed)
+ npx http-server
+ ```
+
+3. Navigate to `http://localhost:8000` in your browser
+
+### Usage
+
+1. **Create a Conversation**: Generate a conversation using the [Tavus Platform](https://platform.tavus.io) or the [Create Conversation API](https://docs.tavus.io/api-reference/conversations/create-conversation)
+
+2. **Enter Conversation ID**: Input your conversation ID in the text field
+
+3. **Start Voice Chat**: Click "Start Voice Chat" and allow microphone access when prompted
+
+4. **Talk to Your Replica**: Speak naturally; your replica will respond with both audio and video
+
+5. **Use Controls**:
+ - Click the **Mute** button to toggle your microphone
+ - Click **Leave** to end the conversation
+
+## How It Works
+
+This application uses:
+
+- **[Daily.co](https://daily.co)** for WebRTC video calling infrastructure
+- **[Tavus API](https://docs.tavus.io)** for AI replica conversations
+- **Web Audio API** for microphone access and audio processing
+
+The flow is:
+1. User joins a Daily.co room associated with their Tavus conversation
+2. Microphone audio is streamed to the Tavus replica
+3. The replica processes speech and responds with audio/video
+4. User sees and hears the replica's response in real-time
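The selective-join idea in step 1 can be sketched as a join configuration. `startVideoOff` and `startAudioOff` are daily-js call options; the helper itself is illustrative, not lifted from the demo source.

```javascript
// Hypothetical helper: build join options for a microphone-only session —
// never publish camera video, but do publish microphone audio.
function micOnlyJoinOptions(roomUrl) {
  return {
    url: roomUrl,
    startVideoOff: true,
    startAudioOff: false
  };
}

// Usage (assumes daily-js is loaded in the page):
// const callObject = window.DailyIframe.createCallObject();
// await callObject.join(micOnlyJoinOptions(roomUrl));
```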
+
+## Configuration
+
+### URL Parameters
+
+You can pre-populate the conversation ID using URL parameters:
+
+```
+https://your-domain.com/?conversation_id=c123abc
+```
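Reading that parameter takes only a few lines with the standard `URLSearchParams` API (the function name is illustrative):

```javascript
// Extract conversation_id from a query string such as window.location.search.
function getConversationId(search) {
  return new URLSearchParams(search).get('conversation_id') || '';
}
```

In the browser this would be called as `getConversationId(window.location.search)` to pre-fill the input field on page load.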
diff --git a/examples/microphone-only-cvi/index.html b/examples/microphone-only-cvi/index.html
new file mode 100644
index 0000000..78bd965
--- /dev/null
+++ b/examples/microphone-only-cvi/index.html
@@ -0,0 +1,219 @@
+
+
+
+
+
+ Tavus CVI Microphone-Only Example
+
+
+
+
+
+
+ Note: You can also append ?conversation_id=YOUR_ID to the URL
+
+
+
+
+ Microphone Active
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/examples/microphone-only-cvi/styles.css b/examples/microphone-only-cvi/styles.css
new file mode 100644
index 0000000..a18cfa9
--- /dev/null
+++ b/examples/microphone-only-cvi/styles.css
@@ -0,0 +1,317 @@
+/* General Styling */
+body {
+  background-color: #1e1e1e;
+  font-family: 'Courier New', Courier, monospace;
+  color: #f1f1f1;
+  padding: 20px;
+  max-width: 1200px;
+  margin: 0 auto;
+}
+
+/* Header Container */
+.header-container {
+ text-align: center;
+ margin-bottom: 30px;
+ padding: 15px;
+ border-bottom: 2px solid #df44a6;
+}
+
+.header-container h1 {
+ color: #df44a6;
+ margin-bottom: 10px;
+ font-size: 2em;
+}
+
+.description {
+ max-width: 600px;
+ margin: 0 auto;
+ line-height: 1.4;
+}
+
+.description p {
+ margin: 5px 0;
+ font-size: 1em;
+}
+
+.links {
+ margin-top: 8px;
+ font-size: 0.9em;
+}
+
+.links .separator {
+ color: #666;
+ margin: 0 10px;
+}
+
+.source-link {
+ margin-top: 10px;
+ font-size: 0.9em;
+}
+
+/* Main Container */
+.main-container {
+ display: flex;
+ gap: 20px;
+ max-width: 1400px;
+ margin: 0 auto;
+ padding: 0 20px;
+ justify-content: center;
+ align-items: flex-start;
+}
+
+/* Instructions Container */
+.instructions-container {
+ flex: 0 0 40%;
+ max-width: 500px;
+ background-color: rgba(30, 30, 30, 0.9);
+ padding: 30px;
+ border-radius: 10px;
+ border: 2px solid #df44a6;
+ box-shadow: 0 0 15px #df44a6;
+ height: fit-content;
+}
+
+.instructions-container h2 {
+ color: #df44a6;
+ margin: 0 0 15px 0;
+ font-size: 1.2em;
+ text-transform: uppercase;
+ letter-spacing: 1px;
+ border-bottom: 1px solid #df44a6;
+ padding-bottom: 10px;
+}
+
+.instructions-container ol {
+ background: none;
+ border: none;
+ box-shadow: none;
+ padding: 0;
+ margin: 0 0 20px 0;
+ list-style-position: inside;
+}
+
+.instructions-container li {
+ margin: 15px 0;
+ line-height: 1.6;
+ padding-left: 5px;
+}
+
+.conversation-input {
+ margin-top: 20px;
+ padding: 20px;
+ background-color: rgba(30, 30, 30, 0.5);
+ border-radius: 5px;
+ border: 1px solid #df44a6;
+}
+
+.conversation-input input {
+ width: 100%;
+ padding: 10px;
+ font-size: 16px;
+ background-color: #1e1e1e;
+ color: #f1f1f1;
+ border: 2px solid #df44a6;
+ border-radius: 5px;
+ box-shadow: 0 0 5px #df44a6;
+ font-family: 'Courier New', Courier, monospace;
+ margin-bottom: 10px;
+}
+
+.conversation-input input:focus {
+ outline: none;
+ box-shadow: 0 0 15px #df44a6;
+}
+
+.conversation-input .note {
+ display: block;
+ color: #888;
+ font-size: 0.9em;
+ margin-top: 5px;
+}
+
+/* Microphone Status */
+.mic-status {
+ display: flex;
+ align-items: center;
+ justify-content: center;
+ margin-top: 15px;
+ padding: 10px;
+ background-color: rgba(223, 68, 166, 0.1);
+ border: 1px solid #df44a6;
+ border-radius: 5px;
+ color: #df44a6;
+ font-size: 0.9em;
+}
+
+.mic-indicator {
+ width: 8px;
+ height: 8px;
+ background-color: #df44a6;
+ border-radius: 50%;
+ margin-right: 8px;
+ animation: pulse 1.5s infinite;
+}
+
+@keyframes pulse {
+ 0% {
+ opacity: 1;
+ transform: scale(1);
+ }
+ 50% {
+ opacity: 0.7;
+ transform: scale(1.1);
+ }
+ 100% {
+ opacity: 1;
+ transform: scale(1);
+ }
+}
+
+h1, h2, h3 {
+ color: #df44a6;
+}
+
+a {
+ color: #df44a6;
+ text-decoration: none;
+ transition: all 0.3s ease;
+}
+
+a:hover {
+ color: #e75bbc;
+ text-shadow: 0 0 10px #df44a6;
+}
+
+a:visited {
+ color: #bfbfbf;
+}
+
+/* Video Section */
+.video-section {
+ flex: 0 0 60%;
+ max-width: 800px;
+ display: flex;
+ flex-direction: column;
+ gap: 20px;
+}
+
+#video-container {
+ display: none;
+ width: 100%;
+ aspect-ratio: 16/9;
+ background: #1e1e1e;
+ border-radius: 10px;
+ border: 2px solid #df44a6;
+ box-shadow: 0 0 15px #df44a6;
+ overflow: hidden;
+}
+
+#video-container video {
+ width: 100%;
+ height: 100%;
+ object-fit: cover;
+ margin: 0;
+}
+
+/* Voice Controls */
+#voice-controls {
+ display: none;
+ width: 100%;
+ gap: 15px;
+ justify-content: center;
+}
+
+.voice-control-btn {
+ padding: 12px 24px;
+ font-size: 16px;
+ background-color: #df44a6;
+ color: white;
+ border: none;
+ border-radius: 5px;
+ cursor: pointer;
+ font-family: 'Courier New', Courier, monospace;
+ box-shadow: 0 0 10px #df44a6;
+ transition: all 0.3s ease;
+ min-width: 120px;
+}
+
+.voice-control-btn:hover {
+ background-color: #e75bbc;
+ box-shadow: 0 0 15px #df44a6;
+ transform: translateY(-2px);
+}
+
+.voice-control-btn:active {
+ transform: translateY(0);
+}
+
+/* Start Button */
+#start-btn {
+ width: 100%;
+ margin-top: 20px;
+ background-color: #df44a6;
+ border: none;
+ color: white;
+ padding: 15px 30px;
+ text-align: center;
+ text-decoration: none;
+ display: block;
+ font-size: 18px;
+ border-radius: 5px;
+ box-shadow: 0 0 10px #df44a6;
+ transition: all 0.3s ease;
+ cursor: pointer;
+ font-family: 'Courier New', Courier, monospace;
+}
+
+#start-btn:hover {
+ background-color: #e75bbc;
+ box-shadow: 0 0 15px #df44a6;
+}
+
+/* Instructions */
+ol {
+ background-color: rgba(30, 30, 30, 0.9);
+ padding: 20px 40px;
+ border-radius: 10px;
+ border: 2px solid #df44a6;
+ box-shadow: 0 0 15px #df44a6;
+ max-width: 600px;
+ margin: 20px auto;
+}
+
+code {
+ background-color: #2a2a2a;
+ padding: 2px 5px;
+ border-radius: 3px;
+ color: #df44a6;
+ font-family: 'Courier New', Courier, monospace;
+}
+
+/* Responsive Design */
+@media (max-width: 768px) {
+ .main-container {
+ flex-direction: column;
+ gap: 20px;
+ }
+
+ .instructions-container {
+ flex: none;
+ max-width: none;
+ }
+
+ .video-section {
+ flex: none;
+ max-width: none;
+ }
+
+ #voice-controls {
+ flex-direction: column;
+ align-items: center;
+ }
+
+ .voice-control-btn {
+ width: 200px;
+ }
+}
diff --git a/examples/red_or_black/.gitignore b/examples/red_or_black/.gitignore
new file mode 100644
index 0000000..b70730f
--- /dev/null
+++ b/examples/red_or_black/.gitignore
@@ -0,0 +1,3 @@
+# Persona files
+persona_id.txt
+red_or_black_persona.sh
\ No newline at end of file
diff --git a/examples/red_or_black/README.md b/examples/red_or_black/README.md
new file mode 100644
index 0000000..cc32771
--- /dev/null
+++ b/examples/red_or_black/README.md
@@ -0,0 +1,158 @@
+# Red or Black Card Game
+
+A demo application that showcases the implementation of tool calls with [Tavus Conversational Video Interface (CVI)](https://docs.tavus.io/sections/conversational-video-interface/what-is-cvi-overview) replicas.
+
+[LIVE DEMO](https://andy-tavus.github.io/red_or_black/)
+
+## Overview
+
+This application demonstrates a simple card game where a virtual dealer (Tavus replica) asks the user to guess whether the next card drawn will be red or black. The application uses Tavus CVI to create an interactive AI host that responds to user inputs and performs actions through tool calls.
+
+## Why a Persona is Required
+
+A Tavus persona is essential for this application because it defines the AI's behavior and capabilities, particularly through tool calls. The persona configuration includes:
+- The system prompt that gives the AI its role as a card game host
+- The tool call definition that enables the AI to detect and process user guesses
+- The context that helps the AI understand the game rules and flow
+
+Without a properly configured persona, the AI wouldn't know how to recognize user guesses or trigger the card drawing functionality. The persona acts as the "brain" of the game, connecting user interactions to the game mechanics through tool calls.
+
+## Technical Implementation
+
+### Core Technologies
+
+- HTML, CSS, and JavaScript for the frontend interface
+- [Daily.co API](https://docs.daily.co/) for video communication
+- [Tavus API](https://docs.tavus.io/) for creating and managing the AI persona
+- [Deck of Cards API](https://deckofcardsapi.com/) for card drawing functionality
+
+### Key Components
+
+1. **User Interface**: A casino-themed interface with video display for the AI dealer and card display area
+2. **Tavus CVI Integration**: Creates conversational AI interactions with a persona designed to host the card game
+3. **Tool Call Implementation**: Enables the AI persona to detect user color guesses and trigger card draws
+
+## Tool Call Implementation - Step by Step
+
+The central feature of this demo is the implementation of tool calls that allow the Tavus AI to interact with external systems. Here's how it works:
+
+### 1. Persona Configuration
+
+The persona is configured with a tool called `detect_color` that records the user's guess:
+
+```json
+{
+ "type": "function",
+ "function": {
+ "name": "detect_color",
+ "description": "Record the user's guess of whether the next card will be red or black",
+ "parameters": {
+ "type": "object",
+ "properties": {
+ "guess": {
+ "type": "string",
+ "description": "The color guessed by the user (red or black)",
+ "enum": ["red", "black"]
+ }
+ },
+ "required": ["guess"]
+ }
+ }
+}
+```
+
+This definition is crucial: it instructs the AI on what to listen for in user utterances. When a user says "red" or "black" (or a phrase containing those words), the AI recognizes this as a color guess matching the tool's parameters and automatically triggers the tool call.
+
+The `description` field helps the AI understand when to use the tool, while the `enum` array limits valid values to "red" or "black", preventing the AI from passing invalid colors. This lets the AI map natural language inputs to structured function calls without explicit programming for every possible phrasing.
+
+### 2. Message Handling
+
+The application listens for app messages from the Daily call:
+
+```javascript
+call.on('app-message', handleAppMessage);
+```
+
+### 3. Tool Call Detection
+
+When a tool call is initiated by the AI, the app processes it:
+
+```javascript
+if (message.message_type === 'conversation' && message.event_type === 'conversation.tool_call') {
+ const toolCall = message.properties;
+
+ if (toolCall.name === 'detect_color') {
+ // Process the tool call
+ }
+}
+```
+
+### 4. Parameter Extraction
+
+The application extracts the user's guess from the tool call arguments:
+
+```javascript
+const args = JSON.parse(toolCall.arguments);
+const guess = args.guess; // Will be either "red" or "black"
+```
+
+### 5. External API Integration
+
+The app then draws a card using the Deck of Cards API:
+
+```javascript
+const response = await fetch('https://deckofcardsapi.com/api/deck/new/draw/?count=1');
+const data = await response.json();
+const card = data.cards[0];
+```
+
+### 6. Result Processing
+
+The application determines if the user's guess was correct:
+
+```javascript
+const isRed = card.suit === 'HEARTS' || card.suit === 'DIAMONDS';
+const isCorrect = (guess === 'red' && isRed) || (guess === 'black' && !isRed);
+```
+
+### 7. Response to AI
+
+The result is sent back to the AI as an echo message:
+
+```javascript
+const responseMessage = {
+ message_type: "conversation",
+ event_type: "conversation.echo",
+ conversation_id: message.conversation_id,
+ properties: {
+ text: isCorrect ?
+ `You guessed right! The card was the ${cardValue} of ${cardSuit}.` :
+ `You guessed wrong! The card was the ${cardValue} of ${cardSuit}.`
+ }
+};
+call.sendAppMessage(responseMessage, '*');
+```
+
+This message allows the AI to continue the conversation with knowledge of the result.
+
+## Full Tool Call Flow
+
+1. **User speaks**: The user says "red" or "black" to make their guess
+2. **AI processes**: The AI recognizes this as a color guess and initiates a tool call
+3. **Tool call triggered**: The `detect_color` function is called with the user's guess
+4. **Application responds**: The app draws a card and determines if the guess was correct
+5. **Echo message**: The result is sent back to the AI
+6. **AI continues**: The AI uses the result to continue the conversation naturally
+
+## Getting Started
+
+1. Clone the repository
+2. Obtain a Tavus API key from your Tavus account
+3. Open `index.html` in a browser
+4. Enter your Persona ID and Tavus API key
+5. Click "Join the Game" to start the conversation
+
+## Resources
+
+- [Tavus Documentation](https://docs.tavus.io/)
+- [Tavus CVI Overview](https://docs.tavus.io/sections/conversational-video-interface/what-is-cvi-overview)
+- [Creating a Persona](https://docs.tavus.io/sections/conversational-video-interface/creating-a-persona)
+- [Custom LLM Onboarding](https://docs.tavus.io/sections/conversational-video-interface/custom-llm-onboarding)
diff --git a/examples/red_or_black/index.html b/examples/red_or_black/index.html
new file mode 100644
index 0000000..affd365
--- /dev/null
+++ b/examples/red_or_black/index.html
@@ -0,0 +1,807 @@
+
+
+
+
+
+ Red or Black Card Game
+
+
+
+
+
+
+
+
+
+
{
+ "persona_name": "Card Game Host v2",
+ "system_prompt": "You are a friendly card game host who guides users through a game of Red or Black. In this game, users guess whether the next card will be red or black.",
+ "context": "Red or Black is a simple card game where players guess the color of the next card to be drawn from the deck. When a user says \"red\" or \"black\", you should acknowledge their guess and use the detect_color tool to record their choice.",
+ "default_replica_id": "r6583a465c",
+ "layers": {
+ "llm": {
+ "tools": [
+ {
+ "type": "function",
+ "function": {
+ "name": "detect_color",
+ "parameters": {
+ "type": "object",
+ "required": ["guess"],
+ "properties": {
+ "guess": {
+ "enum": ["red", "black"],
+ "type": "string",
+ "description": "The color guessed by the user (red or black)"
+ }
+ }
+ },
+ "description": "Record the user's guess of whether the next card will be red or black"
+ }
+ }
+ ]
+ },
+ "tts": {
+ "tts_engine": "cartesia",
+ "tts_emotion_control": true
+ },
+ "perception": {
+ "perception_model": "raven-0"
+ },
+ "stt": {
+ "stt_engine": "tavus-advanced",
+ "participant_pause_sensitivity": "high",
+ "participant_interrupt_sensitivity": "high",
+ "smart_turn_detection": true
+ }
+ }
+}