# Configuration Guide

**Status**: ✅ Complete
**Last Updated**: December 3, 2025

---

## Overview

RiceCoder is highly configurable, allowing you to customize behavior, appearance, and integrations to match your workflow. This guide covers all configuration options, file locations, environment variables, and the configuration hierarchy.

## Configuration Hierarchy

RiceCoder loads configuration in priority order (highest to lowest):

1. **Runtime Overrides** - CLI flags and environment variables
2. **Project Configuration** - `.agent/config.yaml` in your project
3. **User Configuration** - `~/.ricecoder/config.yaml` in your home directory
4. **Built-in Defaults** - Hardcoded defaults in RiceCoder

Each level overrides the ones below it. For example, if you set `provider: openai` in your user config but `provider: ollama` in your project config, the project config takes precedence.

### Example: Configuration Hierarchy

```yaml
# ~/.ricecoder/config.yaml (User level)
provider: openai
model: gpt-4
theme: dracula

# .agent/config.yaml (Project level)
provider: ollama
model: mistral

# Result: provider=ollama (project overrides user), model=mistral (project), theme=dracula (from user)
```

## Configuration File Locations

### Global Configuration

**Location**: `~/.ricecoder/config.yaml`
**Purpose**: User-level settings that apply to all projects
**Created by**: `rice init` command

**Example**:

```yaml
# ~/.ricecoder/config.yaml
provider: openai
api-key: sk-...
theme: dracula
log-level: info
```

### Project Configuration

**Location**: `.agent/config.yaml` in your project root
**Purpose**: Project-specific settings that override global configuration
**Created by**: `rice init` in a project directory

**Example**:

```yaml
# .agent/config.yaml
provider: ollama
model: mistral
permissions:
  file-write: ask
  file-delete: deny
```

## Configuration File Format

RiceCoder uses YAML format for configuration files.
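Once each YAML file is parsed, the layered resolution described in the hierarchy above reduces to a recursive dictionary merge. The sketch below is hypothetical (it is not RiceCoder's actual loader); plain Python dictionaries stand in for the parsed YAML files:

```python
# Hypothetical sketch of layered config resolution (not RiceCoder's code).
# Later (higher-priority) layers override earlier ones; nested objects
# such as `permissions` merge key by key instead of being replaced wholesale.

def deep_merge(base: dict, override: dict) -> dict:
    """Merge `override` into `base`, recursing into nested dicts."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Stand-ins for the parsed YAML layers from the hierarchy example.
user_cfg = {"provider": "openai", "model": "gpt-4", "theme": "dracula"}
project_cfg = {"provider": "ollama", "model": "mistral"}

config = deep_merge(user_cfg, project_cfg)
print(config)  # {'provider': 'ollama', 'model': 'mistral', 'theme': 'dracula'}
```

This matches the hierarchy example: the project layer wins for `provider` and `model`, while `theme` survives from the user layer because the project config never sets it.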
YAML is human-readable and supports:

- **Strings**: `key: value`
- **Numbers**: `timeout: 30`
- **Booleans**: `enabled: true`
- **Lists**: `models: [gpt-4, gpt-3.5-turbo]`
- **Objects**: Nested configuration with indentation

### YAML Basics

```yaml
# Comments start with #

# Strings
name: RiceCoder

# Numbers
timeout: 30
port: 8080

# Booleans
enabled: true
debug: false

# Lists
models:
  - gpt-4
  - gpt-3.5-turbo
  - claude-3-opus

# Objects (nested)
provider:
  name: openai
  api-key: sk-...
  timeout: 30
```

## Configuration Options

### Provider Configuration

#### `provider`

**Type**: String
**Valid Values**: `openai`, `anthropic`, `github-copilot`, `ollama`, `custom`
**Default**: `openai`
**Description**: Which AI provider to use for code generation and chat

**Example**:

```yaml
provider: openai
```

#### `model`

**Type**: String
**Valid Values**: Depends on provider (see [AI Providers Guide](./AI-Providers.md))
**Default**: `gpt-4` (for OpenAI)
**Description**: Which model to use from the selected provider

**Example**:

```yaml
model: gpt-4
```

#### `api-key`

**Type**: String
**Default**: None (required for cloud providers)
**Description**: API key for the selected provider

**Security Note**: Never commit this to version control. Use environment variables instead.
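One way such environment-variable references can work is a `${VAR}` placeholder expansion pass over config values after parsing. The helper below is an illustrative sketch, not RiceCoder's actual implementation:

```python
import os
import re

# Hypothetical sketch of ${VAR} placeholder expansion (not RiceCoder's code).
_PLACEHOLDER = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}")

def expand_env(value: str) -> str:
    """Replace each ${VAR} in a config value with the environment variable.

    Raises KeyError if the variable is unset, so a missing secret fails
    loudly instead of silently becoming an empty string.
    """
    return _PLACEHOLDER.sub(lambda m: os.environ[m.group(1)], value)

os.environ["OPENAI_API_KEY"] = "sk-demo-123"  # demo value for illustration only
print(expand_env("${OPENAI_API_KEY}"))  # sk-demo-123
```

Because expansion happens at load time, the file on disk (and in version control) never contains the secret itself, only the placeholder.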
**Example**:

```yaml
api-key: ${OPENAI_API_KEY} # Use environment variable
```

### Ollama Configuration

#### `ollama-url`

**Type**: String
**Default**: `http://localhost:11434`
**Description**: URL where Ollama is running

**Example**:

```yaml
provider: ollama
ollama-url: http://localhost:11434
```

#### `ollama-timeout`

**Type**: Number (seconds)
**Default**: `300`
**Description**: Timeout for Ollama requests

**Example**:

```yaml
ollama-timeout: 600
```

### UI Configuration

#### `theme`

**Type**: String
**Valid Values**: `dracula`, `nord`, `solarized`, `monokai`, `gruvbox`
**Default**: `dracula`
**Description**: Color theme for the TUI interface

**Example**:

```yaml
theme: nord
```

#### `font-size`

**Type**: Number
**Default**: `12`
**Description**: Font size for terminal rendering (if supported)

**Example**:

```yaml
font-size: 14
```

### Behavior Configuration

#### `log-level`

**Type**: String
**Valid Values**: `error`, `warn`, `info`, `debug`, `trace`
**Default**: `info`
**Description**: Verbosity of logging output

**Example**:

```yaml
log-level: debug
```

#### `auto-approve`

**Type**: Boolean
**Default**: `false`
**Description**: Automatically approve generated code without asking

**Security Note**: Use with caution; always review generated code.

**Example**:

```yaml
auto-approve: false
```

#### `timeout`

**Type**: Number (seconds)
**Default**: `300`
**Description**: Default timeout for operations

**Example**:

```yaml
timeout: 600
```

### Permissions Configuration

#### `permissions`

**Type**: Object
**Description**: Control what RiceCoder can do

**Valid Permissions**:

- `file-write`: Write files (`allow`, `ask`, `deny`)
- `file-delete`: Delete files (`allow`, `ask`, `deny`)
- `shell-execute`: Execute shell commands (`allow`, `ask`, `deny`)
- `network-access`: Access network (`allow`, `ask`, `deny`)

**Default**: All set to `ask`

**Example**:

```yaml
permissions:
  file-write: ask
  file-delete: deny
  shell-execute: ask
  network-access: allow
```

### Custom Commands

#### `commands`

**Type**: Object
**Description**: Define custom shell commands for use in chat

**Example**:

```yaml
commands:
  test:
    command: cargo test
    description: Run tests
  build:
    command: cargo build --release
    description: Build release binary
  fmt:
    command: cargo fmt
    description: Format code
```

**Usage in Chat**:

```bash
> /test
```

## Environment Variables

RiceCoder supports environment variables for sensitive configuration and runtime overrides.

### Provider Environment Variables

#### `RICECODER_PROVIDER`

**Description**: Override the configured provider

**Example**:

```bash
export RICECODER_PROVIDER=ollama
rice chat
```

#### `RICECODER_MODEL`

**Description**: Override the configured model

**Example**:

```bash
export RICECODER_MODEL=gpt-3.5-turbo
rice chat
```

#### `OPENAI_API_KEY`

**Description**: OpenAI API key (used if `api-key` is not set in config)

**Example**:

```bash
export OPENAI_API_KEY=sk-...
```

#### `ANTHROPIC_API_KEY`

**Description**: Anthropic API key

**Example**:

```bash
export ANTHROPIC_API_KEY=sk-ant-...
```

#### `GITHUB_TOKEN`

**Description**: GitHub token for GitHub Copilot

**Example**:

```bash
export GITHUB_TOKEN=ghp_...
```

### System Environment Variables

#### `RICECODER_HOME`

**Description**: Override the default RiceCoder home directory
**Default**: `~/.ricecoder`

**Example**:

```bash
export RICECODER_HOME=/custom/path
```

#### `RICECODER_CONFIG`

**Description**: Override the configuration file location
**Default**: `~/.ricecoder/config.yaml`

**Example**:

```bash
export RICECODER_CONFIG=/path/to/config.yaml
```

#### `RICECODER_LOG_LEVEL`

**Description**: Override the log level
**Valid Values**: `error`, `warn`, `info`, `debug`, `trace`

**Example**:

```bash
export RICECODER_LOG_LEVEL=debug
```

#### `RICECODER_TIMEOUT`

**Description**: Override the default timeout (in seconds)

**Example**:

```bash
export RICECODER_TIMEOUT=600
```

## Example Configurations

### Minimal Configuration

For quick setup with OpenAI:

```yaml
# ~/.ricecoder/config.yaml
provider: openai
api-key: ${OPENAI_API_KEY}
model: gpt-4
```

### Local Development with Ollama

For offline development with local models:

```yaml
# ~/.ricecoder/config.yaml
provider: ollama
ollama-url: http://localhost:11434
model: mistral
log-level: debug
```

### Enterprise Configuration

For team environments with strict permissions:

```yaml
# .agent/config.yaml
provider: anthropic
api-key: ${ANTHROPIC_API_KEY}
model: claude-3-opus
permissions:
  file-write: ask
  file-delete: deny
  shell-execute: ask
  network-access: allow
log-level: info
timeout: 600
commands:
  test:
    command: cargo test
    description: Run tests
  lint:
    command: cargo clippy
    description: Run linter
```

### Multi-Provider Setup

Switch between providers for different tasks:

```yaml
# ~/.ricecoder/config.yaml
provider: openai
model: gpt-4

# Project-specific override for local development
# .agent/config.yaml
provider: ollama
model: mistral
```

Then switch at runtime:

```bash
# Use project config (Ollama)
rice chat

# Override with an environment variable
RICECODER_PROVIDER=openai rice chat
```

### High-Performance Configuration

For faster responses with cost optimization:

```yaml
# ~/.ricecoder/config.yaml
provider: openai
model: gpt-3.5-turbo
timeout: 60
log-level: warn
```

## Setting Configuration

### Using the CLI

#### Set a Value

```bash
rice config set provider openai
rice config set model gpt-4
rice config set api-key sk-...
```

#### Get a Value

```bash
rice config get provider
rice config get model
```

#### Show All Configuration

```bash
rice config show
```

#### Reset to Defaults

```bash
rice config reset
```

### Editing Configuration Files Directly

Edit `~/.ricecoder/config.yaml` or `.agent/config.yaml` with your text editor:

```bash
# Edit global config
nano ~/.ricecoder/config.yaml

# Edit project config
nano .agent/config.yaml
```

### Using Environment Variables

Override configuration at runtime:

```bash
export OPENAI_API_KEY=sk-...
export RICECODER_PROVIDER=openai
export RICECODER_MODEL=gpt-4
rice chat
```

## Troubleshooting

### "Configuration file not found"

**Problem**: RiceCoder can't find your configuration file

**Solution**:

1. Check if `~/.ricecoder/config.yaml` exists:

   ```bash
   ls -la ~/.ricecoder/config.yaml
   ```

2. If not, initialize RiceCoder:

   ```bash
   rice init
   ```

3. Or specify the config file explicitly:

   ```bash
   export RICECODER_CONFIG=/path/to/config.yaml
   ```

### "API key not found"

**Problem**: RiceCoder can't find your API key

**Solution**:

1. Check if the API key is set in config:

   ```bash
   rice config get api-key
   ```

2. If not, set it:

   ```bash
   rice config set api-key sk-...
   ```

3. Or use an environment variable:

   ```bash
   export OPENAI_API_KEY=sk-...
   ```

### "Invalid configuration"

**Problem**: Configuration file has syntax errors

**Solution**:

1. Check YAML syntax (indentation, quotes, etc.)
2. Validate with a YAML validator: https://www.yamllint.com/
3. Check for common issues:
   - Incorrect indentation (use spaces, not tabs)
   - Missing quotes around strings with special characters
   - Incorrect boolean values (use `true`/`false`, not `yes`/`no`)

### "Configuration not taking effect"

**Problem**: Changes to configuration don't seem to apply

**Solution**:

1. Check the configuration hierarchy - project config overrides user config
2. Check environment variables - they override file configuration
3. Verify the correct file is being edited:

   ```bash
   rice config show  # Shows which config is active
   ```

4. Restart RiceCoder after making changes

### "Permission denied" errors

**Problem**: RiceCoder is blocked from performing an action

**Solution**:

1. Check the permissions configuration:

   ```bash
   rice config get permissions
   ```

2. Update permissions:

   ```yaml
   permissions:
     file-write: allow
     file-delete: ask
     shell-execute: ask
   ```

3. Or use the CLI:

   ```bash
   rice config set permissions.file-write allow
   ```

### "Connection refused" (Ollama)

**Problem**: Can't connect to Ollama

**Solution**:

1. Check if Ollama is running:

   ```bash
   curl http://localhost:11434/api/tags
   ```

2. If not, start Ollama:

   ```bash
   ollama serve
   ```

3. Check the URL in configuration:

   ```bash
   rice config get ollama-url
   ```

4. If using a different URL, update it:

   ```bash
   rice config set ollama-url http://your-ollama-url:11434
   ```

### "Model not found"

**Problem**: The specified model doesn't exist

**Solution**:

1. Check available models:

   ```bash
   # For Ollama
   ollama list

   # For OpenAI
   rice config show  # Check model name
   ```

2. Pull the model (Ollama):

   ```bash
   ollama pull mistral
   ```

3. Update configuration:

   ```bash
   rice config set model mistral
   ```

## Best Practices

### 1. Use Environment Variables for Secrets

Never commit API keys to version control:

```yaml
# ✓ Good
api-key: ${OPENAI_API_KEY}

# ✗ Bad
api-key: sk-...
```

### 2. Use Project Configuration for Team Settings

Store team-specific settings in `.agent/config.yaml`:

```yaml
# .agent/config.yaml
provider: anthropic
permissions:
  file-delete: deny
```

### 3. Document Custom Commands

Add descriptions to custom commands:

```yaml
commands:
  test:
    command: cargo test
    description: Run all tests
  lint:
    command: cargo clippy
    description: Run linter and check for warnings
```

### 4. Use Appropriate Log Levels

- `error`: Only errors
- `warn`: Errors and warnings
- `info`: General information (default)
- `debug`: Detailed debugging information
- `trace`: Very detailed tracing

### 5. Set Reasonable Timeouts

Adjust timeouts based on your network and model:

```yaml
timeout: 300  # 5 minutes for complex operations
```

### 6. Review Permissions Carefully

Be explicit about what RiceCoder can do:

```yaml
permissions:
  file-write: ask      # Ask before writing files
  file-delete: deny    # Never delete files
  shell-execute: ask   # Ask before running commands
```

## See Also

- [Quick Start Guide](./Quick-Start.md) - Get started quickly
- [CLI Commands](./CLI-Commands.md) - All available commands
- [AI Providers Guide](./AI-Providers.md) - Provider setup and comparison
- [Local Models Guide](./Local-Models.md) - Using Ollama for local models
- [Troubleshooting Guide](./Troubleshooting.md) - Common issues and solutions

---

*Last updated: December 3, 2025*