Mo Abualruz edited this page Dec 4, 2025 · 2 revisions

Frequently Asked Questions (FAQ)

Status: ✅ Complete

Last Updated: December 3, 2025


Overview

This page answers common questions about RiceCoder. If you don't find your answer here, check the Troubleshooting Guide or visit our GitHub Discussions.


Setup & Installation

Q: How do I install RiceCoder?

A: RiceCoder can be installed from source or via Cargo:

# From source
git clone https://github.com/moabualruz/ricecoder.git
cd ricecoder
cargo install --path projects/ricecoder

# Via Cargo
cargo install ricecoder

# Verify installation
rice --version

For detailed instructions, see Installation & Setup.

Q: What are the system requirements?

A: RiceCoder requires:

  • Rust 1.75+ (for building from source)
  • Git
  • An AI provider API key (OpenAI, Anthropic, etc.) OR Ollama for local models
  • 2GB RAM minimum
  • 500MB disk space

See Installation & Setup for platform-specific requirements.

Q: Can I use RiceCoder on Windows/macOS/Linux?

A: Yes! RiceCoder runs on all major platforms:

  • Windows 10+ (via WSL2 or native)
  • macOS 10.15+
  • Linux (Ubuntu 20.04+, Fedora 33+, etc.)

See Installation & Setup for platform-specific instructions.

Q: How do I uninstall RiceCoder?

A: If installed via Cargo:

cargo uninstall ricecoder

Then remove configuration:

rm -rf ~/.ricecoder

Configuration

Q: How do I set up my API key?

A: Use the rice config command:

rice config set api-key YOUR_API_KEY

Or set the environment variable:

export RICECODER_API_KEY=YOUR_API_KEY

For detailed configuration, see Configuration Guide.

Q: Where are configuration files stored?

A: RiceCoder uses a configuration hierarchy:

  1. Runtime: CLI flags and environment variables
  2. Project: .agent/config.yaml in your project
  3. User: ~/.ricecoder/config.yaml in your home directory
  4. Defaults: Built-in defaults

See Configuration Guide for details.
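As an illustration, a minimal user-level config might look like the following. The keys shown are assumptions inferred from the `rice config set` names used on this page, not a confirmed schema:

```yaml
# ~/.ricecoder/config.yaml (hypothetical example)
provider: openai   # overridden by .agent/config.yaml, env vars, or CLI flags
model: gpt-4
# Prefer the RICECODER_API_KEY environment variable over storing keys on disk
```

Settings higher in the hierarchy (CLI flags, then project config) win over this file.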

Q: How do I switch between different AI providers?

A: Use the rice config command:

# Switch to OpenAI
rice config set provider openai
rice config set model gpt-4

# Switch to Anthropic
rice config set provider anthropic
rice config set model claude-3-opus

# Switch to Ollama (local)
rice config set provider ollama
rice config set model mistral

See AI Providers Guide for all supported providers.

Q: Can I use multiple AI providers?

A: Currently, RiceCoder uses one provider at a time. However, you can switch providers easily:

rice config set provider openai
rice chat  # Uses OpenAI

rice config set provider ollama
rice chat  # Uses Ollama

See Configuration Guide for more details.

Q: How do I reset configuration to defaults?

A: Use the reset command:

rice config reset

This resets to built-in defaults. Your API keys are preserved.


Usage

Q: How do I start using RiceCoder?

A: Follow these steps:

  1. Install RiceCoder (see Setup & Installation above)
  2. Initialize your project: rice init
  3. Configure your AI provider: rice config set provider openai
  4. Set your API key: rice config set api-key YOUR_KEY
  5. Start chatting: rice chat

See Quick Start Guide for a detailed walkthrough.

Q: What can I do with RiceCoder?

A: RiceCoder helps you:

  • Chat: Ask questions about your code and get AI-powered responses
  • Generate Code: Create code from specifications
  • Review Code: Get AI-powered code reviews
  • Understand Code: Ask questions about existing code
  • Refactor: Get suggestions for improving code

See What is RiceCoder? for more details.

Q: How do I create a specification?

A: Use the rice spec command:

# Create a new spec
rice spec create my-feature

# This creates:
# .agent/specs/my-feature/requirements.md
# .agent/specs/my-feature/design.md
# .agent/specs/my-feature/tasks.md

# Edit the files to define your feature
# Then generate code:
rice gen --spec my-feature

See Spec-Driven Development Guide for detailed instructions.

Q: How do I generate code from a specification?

A: Use the rice gen command:

# Generate from spec
rice gen --spec my-feature

# Preview changes without applying
rice gen --spec my-feature --preview

# Auto-approve changes
rice gen --spec my-feature --auto-approve

See Spec-Driven Development Guide for more details.

Q: How do I navigate the TUI (Terminal User Interface)?

A: Use keyboard shortcuts:

  • Arrow Keys: Scroll through messages
  • Page Up/Down: Scroll faster
  • Ctrl+C: Exit chat
  • Ctrl+L: Clear screen
  • Tab: Switch between input and message area
  • Enter: Send message

See TUI Interface Guide for complete keyboard shortcuts.

Q: Can I use RiceCoder in a CI/CD pipeline?

A: Yes! RiceCoder can be used in automated workflows:

# Generate code in CI
rice gen --spec my-feature --auto-approve

# Run code review
rice review --file src/main.rs

# Check configuration
rice config show

See CLI Commands Reference for all available commands.
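For example, a GitHub Actions job could run the review step on every pull request. This is a sketch, assuming RiceCoder installs via `cargo install ricecoder` and that the API key is stored as a repository secret; the secret name `RICECODER_API_KEY` simply mirrors the environment variable documented above:

```yaml
# .github/workflows/ricecoder-review.yml (hypothetical sketch)
name: RiceCoder review
on: [pull_request]
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: cargo install ricecoder
      - run: rice review --file src/main.rs
        env:
          RICECODER_API_KEY: ${{ secrets.RICECODER_API_KEY }}
```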


Local Models (Ollama)

Q: How do I use local models with Ollama?

A: Follow these steps:

  1. Install Ollama from https://ollama.ai
  2. Pull a model: ollama pull mistral
  3. Configure RiceCoder:
    rice config set provider ollama
    rice config set model mistral
    rice config set ollama-url http://localhost:11434
  4. Start chatting: rice chat

See Local Models Guide for detailed instructions.

Q: What models are available for Ollama?

A: Popular models include:

  • Mistral (7B) - Fast, good quality
  • Llama 2 (7B, 13B, 70B) - Versatile
  • Neural Chat (7B) - Optimized for chat
  • Orca (13B) - Good reasoning
  • Dolphin (7B, 13B) - Creative writing

See Local Models Guide for more options.

Q: How do I install Ollama?

A: Visit https://ollama.ai and download the installer for your platform:

  • Windows: Download and run installer
  • macOS: Download and run installer
  • Linux: Run installation script

See Local Models Guide for detailed platform-specific instructions.

Q: Why is Ollama slow?

A: Local models are slower than cloud providers because they run on your machine. Performance depends on:

  • GPU: With GPU acceleration, models run 5-10x faster
  • Model Size: Smaller models (7B) are faster than larger ones (70B)
  • RAM: More RAM lets larger models stay in memory instead of swapping to disk

See Local Models Guide for optimization tips.

Q: Can I use Ollama on Windows?

A: Yes! Ollama runs on Windows 10+ with WSL2 or natively. See Local Models Guide for Windows-specific instructions.


AI Providers

Q: Which AI providers does RiceCoder support?

A: RiceCoder supports:

  • OpenAI (GPT-4, GPT-3.5-Turbo)
  • Anthropic (Claude 3 Opus, Sonnet, Haiku)
  • GitHub Copilot (via GitHub API)
  • Ollama (local models)
  • Other providers (via custom configuration)

See AI Providers Guide for setup instructions for each provider.

Q: How do I get an API key for OpenAI?

A: Follow these steps:

  1. Visit https://platform.openai.com/account/api-keys
  2. Sign in or create an account
  3. Click "Create new secret key"
  4. Copy the key
  5. Configure RiceCoder: rice config set api-key YOUR_KEY

See AI Providers Guide for detailed instructions.

Q: How do I get an API key for Anthropic?

A: Follow these steps:

  1. Visit https://console.anthropic.com/
  2. Sign in or create an account
  3. Navigate to API keys
  4. Create a new API key
  5. Configure RiceCoder: rice config set api-key YOUR_KEY

See AI Providers Guide for detailed instructions.

Q: How much does it cost to use RiceCoder?

A: RiceCoder itself is free. Costs depend on your AI provider:

  • OpenAI: Pay-as-you-go (typically $0.01-0.10 per request)
  • Anthropic: Pay-as-you-go (typically $0.01-0.15 per request)
  • GitHub Copilot: $10/month or $100/year
  • Ollama: Free (runs locally)

See AI Providers Guide for pricing details.

Q: Can I use RiceCoder without an API key?

A: Yes! Use Ollama for local models:

rice config set provider ollama
rice config set model mistral
rice chat

See Local Models Guide for details.


Spec-Driven Development

Q: What is spec-driven development?

A: Spec-driven development is a methodology where you:

  1. Write requirements (what to build)
  2. Design the solution (how to build it)
  3. Generate implementation (code)
  4. Validate against requirements

This ensures your code matches your intentions and is well-documented.

See Spec-Driven Development Guide for details.

Q: How do I write a good specification?

A: A good spec includes:

  1. Requirements: User stories with acceptance criteria
  2. Design: Architecture and data models
  3. Tasks: Implementation tasks with dependencies

See Spec-Driven Development Guide for best practices and examples.
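As a sketch, a requirements file for the hypothetical `my-feature` spec might contain a user story with testable acceptance criteria. The structure below is an assumption for illustration; see the guide for the canonical template:

```markdown
<!-- .agent/specs/my-feature/requirements.md (hypothetical example) -->
# Requirements: my-feature

## User Story
As a CLI user, I want to export chat history to a file,
so that I can share conversations with teammates.

## Acceptance Criteria
- [ ] `export` writes the current session to the given path
- [ ] Exporting an empty session produces an empty file, not an error
```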

Q: Can I use specs for existing code?

A: Yes! You can:

  1. Create a spec for existing code
  2. Document the current behavior
  3. Use specs to plan improvements or refactoring

See Spec-Driven Development Guide for examples.

Q: How do I validate that my code matches the spec?

A: RiceCoder helps by:

  1. Generating code from specs
  2. Showing diffs for review
  3. Validating against acceptance criteria

You can also manually verify by checking that all acceptance criteria are met.

See Spec-Driven Development Guide for details.


Performance & Optimization

Q: Why is RiceCoder slow?

A: Performance depends on several factors:

  • AI Provider: Cloud providers are faster than local models
  • Model Size: Larger models are slower but more capable
  • Network: Slow internet affects cloud providers
  • System Resources: Low RAM or CPU affects local models

See Troubleshooting Guide for optimization tips.

Q: How can I make RiceCoder faster?

A: Try these optimizations:

  1. Use a faster model: Switch to a smaller or faster model
  2. Use cloud providers: OpenAI/Anthropic are faster than local models
  3. Optimize your system: Close other applications, free up RAM
  4. Use GPU acceleration: For Ollama, enable GPU support

See Local Models Guide for detailed optimization tips.

Q: How much memory does RiceCoder use?

A: Memory usage depends on:

  • AI Provider: Cloud providers use minimal local memory
  • Model Size: Local models use 4-70GB depending on size
  • Chat History: Longer conversations use more memory

Typical usage:

  • Cloud providers: 100-500MB
  • Local models (7B): 4-8GB
  • Local models (70B): 40-70GB

Troubleshooting

Q: I get "API key not found" error

A: Set your API key:

rice config set api-key YOUR_API_KEY

Or use environment variable:

export RICECODER_API_KEY=YOUR_API_KEY

See Troubleshooting Guide for more details.

Q: I get "Connection refused" error with Ollama

A: Make sure Ollama is running:

ollama serve

Then verify the connection:

rice config set ollama-url http://localhost:11434
rice chat

See Troubleshooting Guide for more details.
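Independently of RiceCoder, you can confirm the Ollama server is reachable with a plain HTTP request (Ollama exposes a `/api/tags` endpoint that lists pulled models):

```shell
# Returns JSON listing installed models if the Ollama server is up
curl http://localhost:11434/api/tags
```

If this fails, the problem is with Ollama itself rather than your RiceCoder configuration.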

Q: I get "Model not found" error

A: Pull the model first:

ollama pull mistral

Then configure RiceCoder:

rice config set model mistral

See Troubleshooting Guide for more details.

Q: RiceCoder crashes or freezes

A: Try these steps:

  1. Check system resources (RAM, CPU)
  2. Restart RiceCoder
  3. Check logs: ~/.ricecoder/logs/
  4. Try a different model or provider

See Troubleshooting Guide for detailed troubleshooting steps.

Q: I can't find my configuration file

A: Configuration files are located at:

  • Global: ~/.ricecoder/config.yaml
  • Project: .agent/config.yaml

Check both locations:

cat ~/.ricecoder/config.yaml
cat .agent/config.yaml

See Configuration Guide for details.


Integration & Advanced

Q: Can I use RiceCoder with my IDE?

A: RiceCoder integrates with:

  • VS Code: Via Kiro IDE extension
  • Vim/Neovim: Via CLI commands
  • Emacs: Via CLI commands
  • Other IDEs: Via CLI commands

See Architecture Overview for integration details.

Q: Can I use RiceCoder in a team?

A: Yes! RiceCoder supports:

  • Shared specs: Store specs in version control
  • Shared configuration: Use project-level config
  • Code review: Generate and review code together

See Contributing Guide for team workflows.

Q: Can I extend RiceCoder?

A: Yes! RiceCoder is extensible via:

  • Custom commands: Define in .agent/config.yaml
  • Custom providers: Implement provider interface
  • Plugins: (Coming in Phase 2)

See Architecture Overview for extension details.
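For instance, a project-level custom command might be declared like this. The key names are hypothetical; the wiki only states that custom commands live in `.agent/config.yaml`:

```yaml
# .agent/config.yaml (hypothetical custom-command sketch)
commands:
  lint-review:
    description: "Run the linter, then ask the AI to review remaining warnings"
    run: "cargo clippy 2>&1 | rice review --stdin"
```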

Q: How do I contribute to RiceCoder?

A: See Contributing Guide for:

  • Development setup
  • Code style guidelines
  • Testing requirements
  • Pull request process

Getting Help

Q: Where can I get help?

A: Several resources are available:

  • Troubleshooting Guide: common errors and fixes
  • GitHub Discussions: https://github.com/moabualruz/ricecoder/discussions
  • GitHub Issues (bug reports): https://github.com/moabualruz/ricecoder/issues

Q: How do I report a bug?

A: Report bugs on GitHub:

  1. Visit https://github.com/moabualruz/ricecoder/issues
  2. Click "New Issue"
  3. Describe the bug with:
    • Steps to reproduce
    • Expected behavior
    • Actual behavior
    • System information (OS, Rust version, etc.)

Q: How do I request a feature?

A: Request features on GitHub:

  1. Visit https://github.com/moabualruz/ricecoder/discussions
  2. Click "New Discussion"
  3. Describe the feature with:
    • Use case
    • Expected behavior
    • Why it's useful

