FAQ
Status: ✅ Complete
Last Updated: December 3, 2025
This page answers common questions about RiceCoder. If you don't find your answer here, check the Troubleshooting Guide or visit our GitHub Discussions.
A: RiceCoder can be installed from source or via Cargo:

```shell
# From source
git clone https://github.com/moabualruz/ricecoder.git
cd ricecoder
cargo install --path projects/ricecoder

# Via Cargo
cargo install ricecoder

# Verify installation
rice --version
```

For detailed instructions, see Installation & Setup.
A: RiceCoder requires:
- Rust 1.75+ (for building from source)
- Git
- An AI provider API key (OpenAI, Anthropic, etc.) OR Ollama for local models
- 2GB RAM minimum
- 500MB disk space
See Installation & Setup for platform-specific requirements.
A: Yes! RiceCoder runs on all major platforms:
- Windows 10+ (via WSL2 or native)
- macOS 10.15+
- Linux (Ubuntu 20.04+, Fedora 33+, etc.)
See Installation & Setup for platform-specific instructions.
A: If installed via Cargo:

```shell
cargo uninstall ricecoder
```

Then remove configuration:

```shell
rm -rf ~/.ricecoder
```

A: Use the rice config command:

```shell
rice config set api-key YOUR_API_KEY
```

Or set the environment variable:

```shell
export RICECODER_API_KEY=YOUR_API_KEY
```

For detailed configuration, see Configuration Guide.
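Either method works; here is a minimal shell sketch of the fallback pattern, where an environment variable takes precedence over a stored value. The `config_value` variable is a stand-in for whatever the config file holds, not an actual RiceCoder lookup:

```shell
# Stand-in for the value persisted by `rice config set api-key ...`
config_value="key-from-config-file"

# If RICECODER_API_KEY is set in the environment, it wins;
# otherwise fall back to the stored value.
effective_key="${RICECODER_API_KEY:-$config_value}"
echo "using: $effective_key"
```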
A: RiceCoder uses a configuration hierarchy:
- Runtime: CLI flags and environment variables
- Project: `.agent/config.yaml` in your project
- User: `~/.ricecoder/config.yaml` in your home directory
- Defaults: Built-in defaults
See Configuration Guide for details.
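As a sketch of the project level of that hierarchy, the script below writes a `.agent/config.yaml` by hand. The keys are illustrative, inferred from the `rice config set` commands in this FAQ, not a confirmed schema:

```shell
# Create a project-level config file by hand (normally `rice config` does this).
proj="$(mktemp -d)"          # stand-in for your project root
mkdir -p "$proj/.agent"
cat > "$proj/.agent/config.yaml" <<'EOF'
provider: openai
model: gpt-4
EOF
cat "$proj/.agent/config.yaml"
```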
A: Use the rice config command:

```shell
# Switch to OpenAI
rice config set provider openai
rice config set model gpt-4

# Switch to Anthropic
rice config set provider anthropic
rice config set model claude-3-opus

# Switch to Ollama (local)
rice config set provider ollama
rice config set model mistral
```

See AI Providers Guide for all supported providers.
A: Currently, RiceCoder uses one provider at a time. However, you can switch providers easily:

```shell
rice config set provider openai
rice chat # Uses OpenAI

rice config set provider ollama
rice chat # Uses Ollama
```

See Configuration Guide for more details.
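Since only one provider is active at a time, a small wrapper can make switching repeatable. This is a dry-run sketch (the `echo` prints the commands a real script would run; `with_provider` is a hypothetical helper, not a RiceCoder feature):

```shell
# Switch provider, then run the given rice subcommand.
# Printed in dry-run form; drop the `echo`s to execute for real.
with_provider() {
  provider="$1"; shift
  echo rice config set provider "$provider"
  echo rice "$@"
}

with_provider openai chat
with_provider ollama chat
```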
A: Use the reset command:

```shell
rice config reset
```

This resets to built-in defaults. Your API keys are preserved.
A: Follow these steps:
- Install RiceCoder (see Setup & Installation above)
- Initialize your project: `rice init`
- Configure your AI provider: `rice config set provider openai`
- Set your API key: `rice config set api-key YOUR_KEY`
- Start chatting: `rice chat`
See Quick Start Guide for a detailed walkthrough.
A: RiceCoder helps you:
- Chat: Ask questions about your code and get AI-powered responses
- Generate Code: Create code from specifications
- Review Code: Get AI-powered code reviews
- Understand Code: Ask questions about existing code
- Refactor: Get suggestions for improving code
See What is RiceCoder? for more details.
A: Use the rice spec command:

```shell
# Create a new spec
rice spec create my-feature

# This creates:
# .agent/specs/my-feature/requirements.md
# .agent/specs/my-feature/design.md
# .agent/specs/my-feature/tasks.md

# Edit the files to define your feature
# Then generate code:
rice gen --spec my-feature
```

See Spec-Driven Development Guide for detailed instructions.
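To see the expected layout without running the tool, the skeleton that `rice spec create` is described as producing can be built by hand (the `mktemp` directory stands in for your repository root):

```shell
# Recreate the described spec skeleton manually.
spec_dir="$(mktemp -d)/.agent/specs/my-feature"
mkdir -p "$spec_dir"
touch "$spec_dir/requirements.md" "$spec_dir/design.md" "$spec_dir/tasks.md"
ls "$spec_dir"
```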
A: Use the rice gen command:

```shell
# Generate from spec
rice gen --spec my-feature

# Preview changes without applying
rice gen --spec my-feature --preview

# Auto-approve changes
rice gen --spec my-feature --auto-approve
```

See Spec-Driven Development Guide for more details.
A: Use keyboard shortcuts:
- Arrow Keys: Scroll through messages
- Page Up/Down: Scroll faster
- Ctrl+C: Exit chat
- Ctrl+L: Clear screen
- Tab: Switch between input and message area
- Enter: Send message
See TUI Interface Guide for complete keyboard shortcuts.
A: Yes! RiceCoder can be used in automated workflows:

```shell
# Generate code in CI
rice gen --spec my-feature --auto-approve

# Run code review
rice review --file src/main.rs

# Check configuration
rice config show
```

See CLI Commands Reference for all available commands.
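In a CI system, those commands could be wired up roughly like the hypothetical GitHub Actions job below. The job and step names, the `cargo install` step, and the secret name are assumptions; only the `rice` commands come from this FAQ:

```yaml
# Hypothetical CI job -- a sketch, not an official RiceCoder workflow.
name: ricecoder-ci
on: [pull_request]
jobs:
  generate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: cargo install ricecoder
      - run: rice gen --spec my-feature --auto-approve
        env:
          RICECODER_API_KEY: ${{ secrets.RICECODER_API_KEY }}
```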
A: Follow these steps:
- Install Ollama from https://ollama.ai
- Pull a model: `ollama pull mistral`
- Configure RiceCoder:
  ```shell
  rice config set provider ollama
  rice config set model mistral
  rice config set ollama-url http://localhost:11434
  ```
- Start chatting: `rice chat`
See Local Models Guide for detailed instructions.
A: Popular models include:
- Mistral (7B) - Fast, good quality
- Llama 2 (7B, 13B, 70B) - Versatile
- Neural Chat (7B) - Optimized for chat
- Orca (13B) - Good reasoning
- Dolphin (7B, 13B) - Creative writing
See Local Models Guide for more options.
A: Visit https://ollama.ai and download the installer for your platform:
- Windows: Download and run installer
- macOS: Download and run installer
- Linux: Run installation script
See Local Models Guide for detailed platform-specific instructions.
A: Local models are slower than cloud providers because they run on your machine. Performance depends on:
- GPU: With GPU acceleration, models run 5-10x faster
- Model Size: Smaller models (7B) are faster than larger ones (70B)
- RAM: More RAM allows larger models to run faster
See Local Models Guide for optimization tips.
A: Yes! Ollama runs on Windows 10+ with WSL2 or natively. See Local Models Guide for Windows-specific instructions.
A: RiceCoder supports:
- OpenAI (GPT-4, GPT-3.5-Turbo)
- Anthropic (Claude 3 Opus, Sonnet, Haiku)
- GitHub Copilot (via GitHub API)
- Ollama (local models)
- Other providers (via custom configuration)
See AI Providers Guide for setup instructions for each provider.
A: Follow these steps:
- Visit https://platform.openai.com/account/api-keys
- Sign in or create an account
- Click "Create new secret key"
- Copy the key
- Configure RiceCoder: `rice config set api-key YOUR_KEY`
See AI Providers Guide for detailed instructions.
A: Follow these steps:
- Visit https://console.anthropic.com/
- Sign in or create an account
- Navigate to API keys
- Create a new API key
- Configure RiceCoder: `rice config set api-key YOUR_KEY`
See AI Providers Guide for detailed instructions.
A: RiceCoder itself is free. Costs depend on your AI provider:
- OpenAI: Pay-as-you-go (typically $0.01-0.10 per request)
- Anthropic: Pay-as-you-go (typically $0.01-0.15 per request)
- GitHub Copilot: $10/month or $100/year
- Ollama: Free (runs locally)
See AI Providers Guide for pricing details.
A: Yes! Use Ollama for local models:

```shell
rice config set provider ollama
rice config set model mistral
rice chat
```

See Local Models Guide for details.
A: Spec-driven development is a methodology where you:
- Write requirements (what to build)
- Design the solution (how to build it)
- Generate implementation (code)
- Validate against requirements
This ensures your code matches your intentions and is well-documented.
See Spec-Driven Development Guide for details.
A: A good spec includes:
- Requirements: User stories with acceptance criteria
- Design: Architecture and data models
- Tasks: Implementation tasks with dependencies
See Spec-Driven Development Guide for best practices and examples.
A: Yes! You can:
- Create a spec for existing code
- Document the current behavior
- Use specs to plan improvements or refactoring
See Spec-Driven Development Guide for examples.
A: RiceCoder helps by:
- Generating code from specs
- Showing diffs for review
- Validating against acceptance criteria
You can also manually verify by checking that all acceptance criteria are met.
See Spec-Driven Development Guide for details.
A: Performance depends on several factors:
- AI Provider: Cloud providers are faster than local models
- Model Size: Larger models are slower but more capable
- Network: Slow internet affects cloud providers
- System Resources: Low RAM or CPU affects local models
See Troubleshooting Guide for optimization tips.
A: Try these optimizations:
- Use a faster model: Switch to a smaller or faster model
- Use cloud providers: OpenAI/Anthropic are faster than local models
- Optimize your system: Close other applications, free up RAM
- Use GPU acceleration: For Ollama, enable GPU support
See Local Models Guide for detailed optimization tips.
A: Memory usage depends on:
- AI Provider: Cloud providers use minimal local memory
- Model Size: Local models use 4-70GB depending on size
- Chat History: Longer conversations use more memory
Typical usage:
- Cloud providers: 100-500MB
- Local models (7B): 4-8GB
- Local models (70B): 40-70GB
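A rough back-of-envelope estimate for local-model memory (an assumption based on common quantization math, not a RiceCoder formula): parameters in billions times bytes per weight, plus roughly 20% overhead. At 8-bit quantization that gives about 8 GB for a 7B model, consistent with the range above:

```shell
# Rule-of-thumb RAM estimate for a quantized local model.
params_b=7           # model size in billions of parameters
bytes_per_weight=1   # 8-bit quantization
# multiply by 12/10 to add ~20% overhead (integer arithmetic)
ram_gb=$(( params_b * bytes_per_weight * 12 / 10 ))
echo "~${ram_gb} GB RAM for a ${params_b}B model at 8-bit"
```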
A: Set your API key:

```shell
rice config set api-key YOUR_API_KEY
```

Or use an environment variable:

```shell
export RICECODER_API_KEY=YOUR_API_KEY
```

See Troubleshooting Guide for more details.
A: Make sure Ollama is running:

```shell
ollama serve
```

Then verify the connection:

```shell
rice config set ollama-url http://localhost:11434
rice chat
```

See Troubleshooting Guide for more details.
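Before pointing RiceCoder at Ollama, you can check that something is actually listening on the default port. This sketch uses bash's `/dev/tcp` trick; `OLLAMA_HOST`/`OLLAMA_PORT` here are illustrative script variables, not RiceCoder settings:

```shell
# Check whether a server is listening on Ollama's default port.
host="${OLLAMA_HOST:-localhost}"
port="${OLLAMA_PORT:-11434}"
# The subshell exits nonzero if the TCP connect fails (or /dev/tcp is
# unsupported by the shell), so we fall through to "not reachable".
if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
  status="reachable"
else
  status="not reachable"
fi
echo "ollama is $status at $host:$port"
```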
A: Pull the model first:

```shell
ollama pull mistral
```

Then configure RiceCoder:

```shell
rice config set model mistral
```

See Troubleshooting Guide for more details.
A: Try these steps:
- Check system resources (RAM, CPU)
- Restart RiceCoder
- Check logs in `~/.ricecoder/logs/`
- Try a different model or provider
See Troubleshooting Guide for detailed troubleshooting steps.
A: Configuration files are located at:
- Global: `~/.ricecoder/config.yaml`
- Project: `.agent/config.yaml`
Check both locations:

```shell
cat ~/.ricecoder/config.yaml
cat .agent/config.yaml
```

See Configuration Guide for details.
A: RiceCoder integrates with:
- VS Code: Via Kiro IDE extension
- Vim/Neovim: Via CLI commands
- Emacs: Via CLI commands
- Other IDEs: Via CLI commands
See Architecture Overview for integration details.
A: Yes! RiceCoder supports:
- Shared specs: Store specs in version control
- Shared configuration: Use project-level config
- Code review: Generate and review code together
See Contributing Guide for team workflows.
A: Yes! RiceCoder is extensible via:
- Custom commands: Define in `.agent/config.yaml`
- Custom providers: Implement provider interface
- Plugins: (Coming in Phase 2)
See Architecture Overview for extension details.
A: See Contributing Guide for:
- Development setup
- Code style guidelines
- Testing requirements
- Pull request process
A: Resources available:
- Documentation: RiceCoder Wiki
- Quick Start: Quick Start Guide
- Troubleshooting: Troubleshooting Guide
- GitHub Issues: Report bugs
- GitHub Discussions: Ask questions
A: Report bugs on GitHub:
- Visit https://github.com/moabualruz/ricecoder/issues
- Click "New Issue"
- Describe the bug with:
- Steps to reproduce
- Expected behavior
- Actual behavior
- System information (OS, Rust version, etc.)
A: Request features on GitHub:
- Visit https://github.com/moabualruz/ricecoder/discussions
- Click "New Discussion"
- Describe the feature with:
- Use case
- Expected behavior
- Why it's useful
- Quick Start Guide - Get started in 5 minutes
- Configuration Guide - Configure RiceCoder
- CLI Commands Reference - All available commands
- Troubleshooting Guide - Solve common problems
- Spec-Driven Development Guide - Master specs
- AI Providers Guide - Set up AI providers
- Local Models Guide - Use Ollama for local models
- TUI Interface Guide - Navigate the interface