
Configuration

Mo Abualruz edited this page Dec 3, 2025 · 1 revision

Configuration Guide

Status: ✅ Complete

Last Updated: December 3, 2025


Overview

RiceCoder is highly configurable, allowing you to customize behavior, appearance, and integrations to match your workflow. This guide covers all configuration options, file locations, environment variables, and the configuration hierarchy.

Configuration Hierarchy

RiceCoder loads configuration in priority order (highest to lowest):

  1. Runtime Overrides - CLI flags and environment variables
  2. Project Configuration - .agent/config.yaml in your project
  3. User Configuration - ~/.ricecoder/config.yaml in your home directory
  4. Built-in Defaults - Hardcoded defaults in RiceCoder

Each higher-priority level overrides the levels below it. For example, if you set provider: openai in your user config but provider: ollama in your project config, the project config takes precedence.

Example: Configuration Hierarchy

# ~/.ricecoder/config.yaml (User level)
provider: openai
model: gpt-4
theme: dracula

# .agent/config.yaml (Project level)
provider: ollama
model: mistral

# Result: provider=ollama (project overrides user), model=mistral (project), theme=dracula (from user)
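The layered resolution above can be sketched as a simple dictionary merge from lowest to highest priority. This is an illustration of the hierarchy, not RiceCoder's actual implementation:

```python
def merge_configs(*layers):
    """Merge config dicts from lowest to highest priority; later layers win."""
    result = {}
    for layer in layers:
        result.update(layer)
    return result

# Built-in defaults (lowest priority)
defaults = {"provider": "openai", "model": "gpt-4", "theme": "dracula"}
# ~/.ricecoder/config.yaml (user level)
user = {"provider": "openai", "model": "gpt-4", "theme": "dracula"}
# .agent/config.yaml (project level, highest file priority)
project = {"provider": "ollama", "model": "mistral"}

effective = merge_configs(defaults, user, project)
# provider and model come from the project config; theme falls through from user/defaults
```

Keys absent from higher layers fall through unchanged, which is why theme stays dracula in the example above.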

Configuration File Locations

Global Configuration

Location: ~/.ricecoder/config.yaml

Purpose: User-level settings that apply to all projects

Created by: rice init command

Example:

# ~/.ricecoder/config.yaml
provider: openai
api-key: sk-...
theme: dracula
log-level: info

Project Configuration

Location: .agent/config.yaml in your project root

Purpose: Project-specific settings that override global configuration

Created by: rice init in a project directory

Example:

# .agent/config.yaml
provider: ollama
model: mistral
permissions:
  file-write: ask
  file-delete: deny

Configuration File Format

RiceCoder uses YAML format for configuration files. YAML is human-readable and supports:

  • Strings: key: value
  • Numbers: timeout: 30
  • Booleans: enabled: true
  • Lists: models: [gpt-4, gpt-3.5-turbo]
  • Objects: Nested configuration with indentation

YAML Basics

# Comments start with #

# Strings
name: RiceCoder

# Numbers
timeout: 30
port: 8080

# Booleans
enabled: true
debug: false

# Lists
models:
  - gpt-4
  - gpt-3.5-turbo
  - claude-3-opus

# Objects (nested)
provider:
  name: openai
  api-key: sk-...
  timeout: 30

Configuration Options

Provider Configuration

provider

Type: String

Valid Values: openai, anthropic, github-copilot, ollama, custom

Default: openai

Description: Which AI provider to use for code generation and chat

Example:

provider: openai

model

Type: String

Valid Values: Depends on provider (see AI Providers Guide)

Default: gpt-4 (for OpenAI)

Description: Which model to use from the selected provider

Example:

model: gpt-4

api-key

Type: String

Default: None (required for cloud providers)

Description: API key for the selected provider

Security Note: Never commit this to version control. Use environment variables instead.

Example:

api-key: ${OPENAI_API_KEY}  # Use environment variable
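The ${OPENAI_API_KEY} placeholder above is substituted from the environment at load time. As a rough sketch of that behavior (Python's os.path.expandvars handles the ${VAR} syntax; RiceCoder's own expansion logic may differ):

```python
import os

# For demonstration only; in practice the key is set in your shell profile
os.environ["OPENAI_API_KEY"] = "sk-test"

raw_value = "${OPENAI_API_KEY}"          # as written in config.yaml
expanded = os.path.expandvars(raw_value)  # placeholder replaced with the env value
```

This keeps the secret out of the file on disk: the config only names the variable, never the key itself.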

Ollama Configuration

ollama-url

Type: String

Default: http://localhost:11434

Description: URL where Ollama is running

Example:

provider: ollama
ollama-url: http://localhost:11434

ollama-timeout

Type: Number (seconds)

Default: 300

Description: Timeout for Ollama requests

Example:

ollama-timeout: 600

UI Configuration

theme

Type: String

Valid Values: dracula, nord, solarized, monokai, gruvbox

Default: dracula

Description: Color theme for the TUI interface

Example:

theme: nord

font-size

Type: Number

Default: 12

Description: Font size for terminal rendering (if supported)

Example:

font-size: 14

Behavior Configuration

log-level

Type: String

Valid Values: error, warn, info, debug, trace

Default: info

Description: Verbosity of logging output

Example:

log-level: debug

auto-approve

Type: Boolean

Default: false

Description: Automatically approve generated code without asking

Security Note: Use with caution; always review generated code

Example:

auto-approve: false

timeout

Type: Number (seconds)

Default: 300

Description: Default timeout for operations

Example:

timeout: 600

Permissions Configuration

permissions

Type: Object

Description: Control what RiceCoder can do

Valid Permissions:

  • file-write: Write files (allow, ask, deny)
  • file-delete: Delete files (allow, ask, deny)
  • shell-execute: Execute shell commands (allow, ask, deny)
  • network-access: Access network (allow, ask, deny)

Default: All set to ask

Example:

permissions:
  file-write: ask
  file-delete: deny
  shell-execute: ask
  network-access: allow
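The allow/ask/deny values act as a three-way gate before each action. A minimal sketch of such a gate, using the example policy above (the function and names here are hypothetical, not RiceCoder's internal API):

```python
PERMISSIONS = {
    "file-write": "ask",
    "file-delete": "deny",
    "shell-execute": "ask",
    "network-access": "allow",
}

def check_permission(action, prompt_user=lambda a: False):
    """Return True if the action may proceed under the policy."""
    policy = PERMISSIONS.get(action, "ask")  # unknown actions default to ask
    if policy == "allow":
        return True
    if policy == "deny":
        return False
    return prompt_user(action)  # "ask": defer to the user's answer

check_permission("network-access")  # allowed unconditionally
check_permission("file-delete")     # always blocked
```

Defaulting unknown actions to ask mirrors the documented default of all permissions being set to ask.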

Custom Commands

commands

Type: Object

Description: Define custom shell commands for use in chat

Example:

commands:
  test:
    command: cargo test
    description: Run tests
  build:
    command: cargo build --release
    description: Build release binary
  fmt:
    command: cargo fmt
    description: Format code

Usage in Chat:

r[ > /test
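Conceptually, the commands block is a name-to-shell-command map, and a /name chat input is a lookup into it. A hypothetical dispatch sketch (not RiceCoder's implementation):

```python
# Mirrors the commands example above
COMMANDS = {
    "test":  {"command": "cargo test", "description": "Run tests"},
    "build": {"command": "cargo build --release", "description": "Build release binary"},
    "fmt":   {"command": "cargo fmt", "description": "Format code"},
}

def resolve(slash_input):
    """Map a chat input like '/test' to its shell command, or None if undefined."""
    name = slash_input.lstrip("/")
    entry = COMMANDS.get(name)
    return entry["command"] if entry else None

resolve("/test")  # looks up "cargo test"
```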

Environment Variables

RiceCoder supports environment variables for sensitive configuration and overrides.

Provider Environment Variables

RICECODER_PROVIDER

Description: Override the configured provider

Example:

export RICECODER_PROVIDER=ollama
rice chat

RICECODER_MODEL

Description: Override the configured model

Example:

export RICECODER_MODEL=gpt-3.5-turbo
rice chat

OPENAI_API_KEY

Description: OpenAI API key (used if api-key not in config)

Example:

export OPENAI_API_KEY=sk-...

ANTHROPIC_API_KEY

Description: Anthropic API key

Example:

export ANTHROPIC_API_KEY=sk-ant-...

GITHUB_TOKEN

Description: GitHub token for GitHub Copilot

Example:

export GITHUB_TOKEN=ghp_...

System Environment Variables

RICECODER_HOME

Description: Override the default RiceCoder home directory

Default: ~/.ricecoder

Example:

export RICECODER_HOME=/custom/path

RICECODER_CONFIG

Description: Override the configuration file location

Default: ~/.ricecoder/config.yaml

Example:

export RICECODER_CONFIG=/path/to/config.yaml

RICECODER_LOG_LEVEL

Description: Override the log level

Valid Values: error, warn, info, debug, trace

Example:

export RICECODER_LOG_LEVEL=debug

RICECODER_TIMEOUT

Description: Override the default timeout (in seconds)

Example:

export RICECODER_TIMEOUT=600
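Because environment variables sit at the top of the hierarchy, resolution checks the environment first and falls back to file config, then to the built-in default. A sketch of that lookup order (the variable names come from this guide; the resolution code is illustrative):

```python
import os

file_config = {"provider": "openai", "timeout": 300}

os.environ["RICECODER_PROVIDER"] = "ollama"  # demonstration only

def resolve(key, env_var, default=None, cast=str):
    """Environment variable wins; otherwise file config; otherwise default."""
    env = os.environ.get(env_var)
    if env is not None:
        return cast(env)
    return file_config.get(key, default)

provider = resolve("provider", "RICECODER_PROVIDER")          # env override wins
timeout = resolve("timeout", "RICECODER_TIMEOUT", 300, int)   # falls back to file
```

The cast parameter matters for numeric settings like RICECODER_TIMEOUT, since environment variables are always strings.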

Example Configurations

Minimal Configuration

For quick setup with OpenAI:

# ~/.ricecoder/config.yaml
provider: openai
api-key: ${OPENAI_API_KEY}
model: gpt-4

Local Development with Ollama

For offline development with local models:

# ~/.ricecoder/config.yaml
provider: ollama
ollama-url: http://localhost:11434
model: mistral
log-level: debug

Enterprise Configuration

For team environments with strict permissions:

# .agent/config.yaml
provider: anthropic
api-key: ${ANTHROPIC_API_KEY}
model: claude-3-opus

permissions:
  file-write: ask
  file-delete: deny
  shell-execute: ask
  network-access: allow

log-level: info
timeout: 600

commands:
  test:
    command: cargo test
    description: Run tests
  lint:
    command: cargo clippy
    description: Run linter

Multi-Provider Setup

Switch between providers for different tasks:

# ~/.ricecoder/config.yaml
provider: openai
model: gpt-4

# Project-specific override for local development
# .agent/config.yaml
provider: ollama
model: mistral

Then switch at runtime:

# Use project config (Ollama)
rice chat

# Override with environment variable
RICECODER_PROVIDER=openai rice chat

High-Performance Configuration

For faster responses with cost optimization:

# ~/.ricecoder/config.yaml
provider: openai
model: gpt-3.5-turbo
timeout: 60
log-level: warn

Setting Configuration

Using the CLI

Set a Value

rice config set provider openai
rice config set model gpt-4
rice config set api-key sk-...

Get a Value

rice config get provider
rice config get model

Show All Configuration

rice config show

Reset to Defaults

rice config reset

Editing Configuration Files Directly

Edit ~/.ricecoder/config.yaml or .agent/config.yaml with your text editor:

# Edit global config
nano ~/.ricecoder/config.yaml

# Edit project config
nano .agent/config.yaml

Using Environment Variables

Override configuration at runtime:

export OPENAI_API_KEY=sk-...
export RICECODER_PROVIDER=openai
export RICECODER_MODEL=gpt-4
rice chat

Troubleshooting

"Configuration file not found"

Problem: RiceCoder can't find your configuration file

Solution:

  1. Check if ~/.ricecoder/config.yaml exists:
ls -la ~/.ricecoder/config.yaml
  2. If not, initialize RiceCoder:
rice init
  3. Or specify the config file:
export RICECODER_CONFIG=/path/to/config.yaml

"API key not found"

Problem: RiceCoder can't find your API key

Solution:

  1. Check if the API key is set in config:
rice config get api-key
  2. If not, set it:
rice config set api-key sk-...
  3. Or use an environment variable:
export OPENAI_API_KEY=sk-...

"Invalid configuration"

Problem: Configuration file has syntax errors

Solution:

  1. Check YAML syntax (indentation, quotes, etc.)
  2. Validate with a YAML validator: https://www.yamllint.com/
  3. Check for common issues:
    • Incorrect indentation (use spaces, not tabs)
    • Missing quotes around strings with special characters
    • Incorrect boolean values (use true/false, not yes/no)

"Configuration not taking effect"

Problem: Changes to configuration don't seem to apply

Solution:

  1. Check the configuration hierarchy - project config overrides user config
  2. Check environment variables - they override file configuration
  3. Verify the correct file is being edited:
rice config show  # Shows which config is active
  4. Restart RiceCoder after making changes

"Permission denied" errors

Problem: RiceCoder is blocked from performing an action

Solution:

  1. Check the permissions configuration:
rice config get permissions
  2. Update permissions:
permissions:
  file-write: allow
  file-delete: ask
  shell-execute: ask
  3. Or use the CLI:
rice config set permissions.file-write allow

"Connection refused" (Ollama)

Problem: Can't connect to Ollama

Solution:

  1. Check if Ollama is running:
curl http://localhost:11434/api/tags
  2. If not, start Ollama:
ollama serve
  3. Check the URL in configuration:
rice config get ollama-url
  4. If using a different URL, update it:
rice config set ollama-url http://your-ollama-url:11434

"Model not found"

Problem: The specified model doesn't exist

Solution:

  1. Check available models:
# For Ollama
ollama list

# For OpenAI
rice config show  # Check model name
  2. Pull the model (Ollama):
ollama pull mistral
  3. Update configuration:
rice config set model mistral

Best Practices

1. Use Environment Variables for Secrets

Never commit API keys to version control:

# ✓ Good
api-key: ${OPENAI_API_KEY}

# ✗ Bad
api-key: sk-...

2. Use Project Configuration for Team Settings

Store team-specific settings in .agent/config.yaml:

# .agent/config.yaml
provider: anthropic
permissions:
  file-delete: deny

3. Document Custom Commands

Add descriptions to custom commands:

commands:
  test:
    command: cargo test
    description: Run all tests
  lint:
    command: cargo clippy
    description: Run linter and check for warnings

4. Use Appropriate Log Levels

  • error: Only errors
  • warn: Errors and warnings
  • info: General information (default)
  • debug: Detailed debugging information
  • trace: Very detailed tracing

5. Set Reasonable Timeouts

Adjust timeouts based on your network and model:

timeout: 300  # 5 minutes for complex operations

6. Review Permissions Carefully

Be explicit about what RiceCoder can do:

permissions:
  file-write: ask      # Ask before writing files
  file-delete: deny    # Never delete files
  shell-execute: ask   # Ask before running commands

