The LLM module handles communication with Large Language Model APIs.
Represents a message in the conversation.

```rust
pub struct Message {
    pub role: String,    // "user", "assistant", or "system"
    pub content: String, // Message content
}
```

Methods:

- `new(role: &str, content: &str) -> Self` - Creates a new message with the given role and content
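A minimal sketch of the struct and its constructor (a hypothetical implementation; the real one may differ in detail):

```rust
pub struct Message {
    pub role: String,    // "user", "assistant", or "system"
    pub content: String, // Message content
}

impl Message {
    // Convert the borrowed string slices into owned Strings.
    pub fn new(role: &str, content: &str) -> Self {
        Message {
            role: role.to_string(),
            content: content.to_string(),
        }
    }
}

fn main() {
    let m = Message::new("user", "Hello!");
    assert_eq!(m.role, "user");
    assert_eq!(m.content, "Hello!");
}
```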
Represents a response from the LLM API.

```rust
pub struct LlmResponse {
    pub content: String,           // Response text
    pub tool_calls: Vec<ToolCall>, // Requested tool calls
    pub tokens_used: u64,          // Tokens consumed
}
```

```rust
pub async fn send_message(
    config: &LlmConfig,
    messages: &[Message],
) -> Result<LlmResponse, Box<dyn std::error::Error>>
```

Sends a message to the LLM API and returns the response.
Parameters:

- `config`: LLM configuration (API key, base URL, model)
- `messages`: Conversation history

Returns:

- `Ok(LlmResponse)`: Successful response with content and tool calls
- `Err(error)`: API error, network error, or parse error
Example:

```rust
let config = LlmConfig { /* ... */ };
let messages = vec![
    Message::new("user", "Hello!"),
];
let response = send_message(&config, &messages).await?;
println!("Response: {}", response.content);
```

```rust
pub async fn send_message_streaming(
    config: &LlmConfig,
    messages: &[Message],
    callback: impl Fn(String),
) -> Result<LlmResponse, Box<dyn std::error::Error>>
```

Sends a message with streaming support, calling the callback for each chunk.
Parameters:

- `config`: LLM configuration
- `messages`: Conversation history
- `callback`: Function called with each content chunk

Returns:

- `Ok(LlmResponse)`: Complete response after streaming
- `Err(error)`: API or network error
Example:

```rust
let response = send_message_streaming(&config, &messages, |chunk| {
    print!("{}", chunk);
}).await?;
```

```rust
pub fn estimate_tokens(text: &str) -> u64
```

Estimates the number of tokens in a text string.
Parameters:

- `text`: Text to estimate

Returns:

- Estimated token count (approximately `text.len() / 4`)
Note: This is a rough approximation. Actual token counts may vary by model.
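The documented approximation could be sketched like this (assuming floor integer division over the byte length; the exact rounding is not specified):

```rust
// Roughly one token per four bytes of text. Note that `len()` counts
// bytes, not characters, so multi-byte UTF-8 text skews the estimate.
pub fn estimate_tokens(text: &str) -> u64 {
    (text.len() as u64) / 4
}

fn main() {
    assert_eq!(estimate_tokens("abcdefgh"), 2); // 8 bytes / 4
    assert_eq!(estimate_tokens("abc"), 0);      // floors below 4 bytes
}
```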
```rust
pub fn format_tool_results(tool_results: &[(String, String)]) -> String
```

Formats tool execution results for sending back to the LLM.
Parameters:

- `tool_results`: Vector of (tool_name, result) tuples

Returns:

- Formatted string describing tool results
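The exact output format is not specified; a hypothetical sketch might simply label each (tool_name, result) pair:

```rust
// Hypothetical formatting: one labeled section per tool result. The real
// implementation may use a different layout expected by the LLM API.
pub fn format_tool_results(tool_results: &[(String, String)]) -> String {
    tool_results
        .iter()
        .map(|(name, result)| format!("Tool `{}` returned:\n{}", name, result))
        .collect::<Vec<_>>()
        .join("\n\n")
}

fn main() {
    let results = vec![("read_file".to_string(), "file contents".to_string())];
    let formatted = format_tool_results(&results);
    assert!(formatted.contains("read_file"));
    assert!(formatted.contains("file contents"));
}
```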
The agent module provides tool execution capabilities.
Represents a tool call requested by the LLM.

```rust
#[derive(Serialize, Deserialize, Debug, Clone)]
pub struct ToolCall {
    pub id: String,
    pub name: String,
    pub arguments: serde_json::Value,
}
```

Fields:

- `id`: Unique identifier for the tool call
- `name`: Name of the tool to execute
- `arguments`: JSON object with tool arguments
```rust
pub async fn execute_tool(
    tool_name: &str,
    arguments: &serde_json::Value,
) -> Result<String, String>
```

Executes a tool by name with the given arguments.
Parameters:

- `tool_name`: Name of the tool to execute
- `arguments`: JSON object with tool-specific arguments

Returns:

- `Ok(String)`: Tool execution result
- `Err(String)`: Error message
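Internally, the dispatch might look like the following simplified, synchronous sketch (the real function is async and takes `serde_json::Value` arguments; a plain string path is used here to keep the example dependency-free):

```rust
// Hypothetical dispatch: match on the tool name, run the matching
// operation, and map any failure into the Err(String) variant.
fn execute_tool(tool_name: &str, path: &str) -> Result<String, String> {
    match tool_name {
        "read_file" => std::fs::read_to_string(path)
            .map_err(|e| format!("Failed to read {}: {}", path, e)),
        other => Err(format!("Unknown tool: {}", other)),
    }
}

fn main() {
    // Unknown tools produce an Err with a message rather than panicking.
    assert!(execute_tool("no_such_tool", "x").is_err());
}
```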
Example:

```rust
let args = json!({
    "path": "example.txt"
});
let result = execute_tool("read_file", &args).await?;
```

Read the contents of a file.
Arguments:

```json
{
  "path": "file/path.txt"
}
```

Returns: File contents as string
Write content to a file (overwrites existing content).
Arguments:

```json
{
  "path": "file/path.txt",
  "content": "Content to write"
}
```

Returns: Success message
Append content to the end of a file.
Arguments:

```json
{
  "path": "file/path.txt",
  "content": "Content to append"
}
```

Returns: Success message
Search for text in a file and replace it.
Arguments:

```json
{
  "path": "file/path.txt",
  "search": "text to find",
  "replace": "replacement text"
}
```

Returns: Success message with replacement count
Delete a file or directory.
Arguments:

```json
{
  "path": "file/path.txt"
}
```

Returns: Success message
Create a directory (including parent directories).
Arguments:

```json
{
  "path": "dir/subdir"
}
```

Returns: Success message
List contents of a directory (non-recursive).
Arguments:

```json
{
  "path": "directory/path"
}
```

Returns: List of files and directories
List contents of a directory recursively.
Arguments:

```json
{
  "path": "directory/path"
}
```

Returns: Tree structure of files and directories
Execute Python code.
Arguments:

```json
{
  "code": "print('Hello, World!')"
}
```

Returns: Standard output and error from execution
Execute bash commands.
Arguments:

```json
{
  "command": "ls -la"
}
```

Returns: Command output
Execute Node.js code.
Arguments:

```json
{
  "code": "console.log('Hello');"
}
```

Returns: Standard output from execution
Execute Ruby code.
Arguments:

```json
{
  "code": "puts 'Hello'"
}
```

Returns: Standard output from execution
Create a development plan in plan.md.
Arguments:

```json
{
  "plan": "## Phase 1\n- [ ] Step 1\n- [ ] Step 2"
}
```

Returns: Success message
Update the status of a plan step.
Arguments:

```json
{
  "step_number": 1,
  "new_status": "completed"
}
```

Returns: Success message
Clear the current plan.
Arguments: None
Returns: Success message
Get git repository status.
Arguments: None
Returns: Git status output
The app module manages application state.
Main application state.

```rust
pub struct App {
    pub user_input: String,
    pub conversation: Vec<String>,
    pub status_message: String,
    pub tool_logs: Vec<String>,
    pub is_executing_tool: bool,
    pub current_tool: String,
    pub session_start_time: std::time::Instant,
    pub tokens_used: u64,
    pub total_requests: u64,
    pub total_tools_executed: u64,
    pub conversation_scroll_position: usize,
    pub tool_logs_scroll_position: usize,
    pub is_streaming: bool,
    pub current_streaming_message: String,
}
```

- `pub fn new() -> Self` - Creates a new App instance with default values
- `pub fn add_tool_log(&mut self, log: String)` - Adds a log entry to the tool logs
Methods for tracking and reporting usage statistics:

- `pub fn increment_tokens(&mut self, tokens: u64)`
- `pub fn increment_requests(&mut self)`
- `pub fn increment_tools_executed(&mut self)`
- `pub fn get_session_duration(&self) -> std::time::Duration`
- `pub fn get_usage_summary(&self) -> String`
Methods for managing conversation scroll position:

- `pub fn scroll_conversation_up(&mut self)`
- `pub fn scroll_conversation_down(&mut self)`
- `pub fn scroll_conversation_to_top(&mut self)`
- `pub fn scroll_conversation_to_bottom(&mut self)`
Methods for managing streaming response state:

- `pub fn start_streaming(&mut self)`
- `pub fn update_streaming_message(&mut self, new_content: &str)`
- `pub fn finish_streaming(&mut self, final_message: String)`
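A minimal sketch of how these streaming-state methods might fit together, assuming `update_streaming_message` receives the full content accumulated so far (the real implementation could equally receive only the newest chunk):

```rust
// Reduced App with only the streaming-related fields.
#[derive(Default)]
struct App {
    is_streaming: bool,
    current_streaming_message: String,
    conversation: Vec<String>,
}

impl App {
    fn start_streaming(&mut self) {
        self.is_streaming = true;
        self.current_streaming_message.clear();
    }

    fn update_streaming_message(&mut self, new_content: &str) {
        self.current_streaming_message = new_content.to_string();
    }

    fn finish_streaming(&mut self, final_message: String) {
        // Move the completed message into the conversation history.
        self.is_streaming = false;
        self.current_streaming_message.clear();
        self.conversation.push(final_message);
    }
}

fn main() {
    let mut app = App::default();
    app.start_streaming();
    app.update_streaming_message("Hel");
    app.update_streaming_message("Hello");
    app.finish_streaming("Hello".to_string());
    assert!(!app.is_streaming);
    assert_eq!(app.conversation, vec!["Hello".to_string()]);
}
```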
The config module handles configuration loading.
Main configuration structure.
```rust
pub struct Config {
    pub llm: LlmConfig,
}
```

LLM-specific configuration.

```rust
pub struct LlmConfig {
    pub provider: Option<String>,
    pub api_key: String,
    pub api_base_url: String,
    pub model_name: String,
}
```

```rust
pub fn from_file(path: &str) -> Result<Self, io::Error>
```

Loads configuration from a TOML file.
Parameters:

- `path`: Path to config file

Returns:

- `Ok(Config)`: Loaded configuration
- `Err(io::Error)`: File not found or parse error
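The fields of `LlmConfig` suggest a TOML layout like the following (a sketch only: the `[llm]` table name is inferred from `Config`'s `llm` field, the key names are assumed to mirror the struct fields, and all values are hypothetical placeholders):

```toml
[llm]
provider = "openai"                          # optional
api_key = "your-api-key-here"
api_base_url = "https://api.example.com/v1"
model_name = "example-model"
```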
Example:

```rust
let config = Config::from_file("config.toml")?;
println!("Model: {}", config.llm.model_name);
```

The UI module handles terminal rendering.
```rust
pub fn ui<B: Backend>(f: &mut Frame<B>, app: &App)
```

Renders the terminal UI for the given app state.
Parameters:

- `f`: Ratatui frame for rendering
- `app`: Current application state
Layout:
- Top 70%: Conversation area
- Next 20%: Tool logs area
- Next line: Status bar
- Bottom: Input area
Scrolling:
- Automatically clamps scroll positions to valid ranges
- Shows indicators when there's more content to scroll
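The clamping behavior can be sketched as follows (names are illustrative, not the actual UI-module internals):

```rust
// A scroll position is limited to the number of lines that extend past
// the visible area; saturating_sub avoids underflow when everything fits.
fn clamp_scroll(position: usize, total_lines: usize, visible_lines: usize) -> usize {
    let max_scroll = total_lines.saturating_sub(visible_lines);
    position.min(max_scroll)
}

fn main() {
    assert_eq!(clamp_scroll(100, 50, 10), 40); // clamped to 50 - 10
    assert_eq!(clamp_scroll(3, 5, 10), 0);     // content fits: no scroll
}
```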
All functions that can fail return Result types:
- `Ok(value)`: Successful execution
- `Err(error)`: Error occurred
Error types:
- `Box<dyn std::error::Error>`: Generic error for API/network issues
- `String`: Simple error messages for tool execution
- `io::Error`: File system errors
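These error types compose: an `io::Error` converts into `Box<dyn Error>` automatically via `?`, while a tool's `String` error needs an explicit conversion. A small sketch with hypothetical function names:

```rust
use std::error::Error;

// Hypothetical tool step that fails with a plain String error.
fn tool_step() -> Result<String, String> {
    Err("tool failed".to_string())
}

fn run() -> Result<(), Box<dyn Error>> {
    // String does not implement Error, so convert it explicitly;
    // std provides From<String> for Box<dyn Error>.
    let _output = tool_step().map_err(Box::<dyn Error>::from)?;
    Ok(())
}

fn main() {
    assert!(run().is_err());
}
```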
Thread safety:

- `Config` and `LlmConfig`: `Clone` + `Send` + `Sync`
- `Message`: `Clone` + `Send` + `Sync`
- `App`: Not thread-safe (single-threaded UI)
Async functions require a Tokio runtime:
```rust
#[tokio::main]
async fn main() {
    let result = send_message(&config, &messages).await;
}
```