Quantum Code is an enterprise-grade AI orchestration platform that enhances code analysis through multi-model intelligence. Instead of relying on a single AI model, it orchestrates multiple AI providers (OpenAI GPT, Anthropic Claude, Google Gemini) to deliver more accurate, comprehensive code reviews and insights.
- Parallel AI Analysis: Runs multiple AI models simultaneously
- Consensus Validation: Cross-verifies results across different models
- Security-First: OWASP Top 10 vulnerability detection
- Enterprise Ready: Production-grade architecture
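In principle, parallel analysis plus consensus validation can be sketched as below. This is an illustration only: the `ask_*` functions are hypothetical stand-ins for real provider calls, not part of the quantum-code API.

```python
# Illustrative sketch: ask_* are hypothetical stand-ins for provider calls
# (OpenAI, Anthropic, Google), not quantum-code's actual API.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def ask_gpt(code):    return "minor issues"   # stand-in for an OpenAI call
def ask_claude(code): return "minor issues"   # stand-in for an Anthropic call
def ask_gemini(code): return "looks clean"    # stand-in for a Gemini call

def review(code):
    providers = [ask_gpt, ask_claude, ask_gemini]
    # Parallel AI Analysis: query every model simultaneously
    with ThreadPoolExecutor(max_workers=len(providers)) as pool:
        verdicts = list(pool.map(lambda ask: ask(code), providers))
    # Consensus Validation: keep the verdict most models agree on
    top, votes = Counter(verdicts).most_common(1)[0]
    return {"verdicts": verdicts, "consensus": top, "agreement": votes / len(verdicts)}

result = review("def add(a, b): return a + b")
```

Running the models concurrently means total latency tracks the slowest provider instead of the sum of all of them.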
```bash
pip install quantum-code
```

```bash
git clone https://github.com/Codewithevilxd/quantum-code.git
cd quantum-code
pip install -e .
```

```bash
# Check CLI
quantum --help

# Or using Python module
python -m quantum_code.cli --help
```

Quantum Code requires API keys from at least one AI provider. Create a `.env` file in your project directory or home folder.
```bash
# In your project directory
cp .env.example .env

# Or in your home directory for global use
# ~/.quantum_code/.env
```

```env
# Primary providers
OPENAI_API_KEY=sk-proj-your-openai-key-here
ANTHROPIC_API_KEY=sk-ant-api03-your-anthropic-key-here
GEMINI_API_KEY=AIzaSy-your-gemini-key-here

# Optional: Enterprise providers
AZURE_OPENAI_API_KEY=...
AWS_ACCESS_KEY_ID=...
GOOGLE_CLOUD_PROJECT=your-project
```

```bash
# Test configuration
python -c "from quantum_code.settings import settings; print('Keys loaded successfully')"
```
```bash
# Review a single file
quantum src/main.py

# Review entire directory
quantum src/

# Review specific file types
quantum src/ --include "*.py" "*.js"
```

```bash
# Use specific AI model
quantum src/ --model claude-sonnet

# Use multiple models
quantum src/ --models gpt-4o,claude-sonnet,gemini-pro

# JSON output for CI/CD
quantum src/ --json > review-results.json

# Verbose logging
quantum src/ -v

# Specify project root
quantum src/ --base-path /path/to/project
```

Add to your `~/.claude.json`:
```json
{
  "mcpServers": {
    "quantum": {
      "command": "python",
      "args": ["-m", "quantum_code.server"]
    }
  }
}
```

```bash
# Restart Claude Code to load MCP server
claude
```

Then, in Claude Code:

```
Can you quantum codereview this authentication module?
Can you quantum compare these two architecture approaches?
Can you quantum debate the best testing strategy?
```
Comprehensive code analysis covering:
- Quality: Code structure, readability, maintainability
- Security: OWASP Top 10 vulnerabilities, injection risks
- Performance: Algorithm efficiency, resource usage
- Architecture: Design patterns, scalability concerns
Usage:

```
# CLI
quantum src/auth.py

# Claude Code
"quantum codereview this authentication module for security issues"
```

Interactive AI assistance with:
- Context Awareness: Repository and file understanding
- Multi-turn Conversations: Maintains conversation history
- Code Examples: Provides relevant code snippets
Usage:

```
# Claude Code
"quantum chat how does this authentication flow work?"
"quantum chat suggest improvements for this API design"
```

Side-by-side analysis of:
- Architecture Approaches: Different design patterns
- Technology Choices: Framework comparisons
- Implementation Strategies: Various solutions
Usage:

```
# Claude Code
"quantum compare REST API vs GraphQL for this use case"
"quantum compare different state management approaches"
```

Advanced consensus building:
- Multi-agent Debate: Models argue different perspectives
- Step 1: Independent analysis from each model
- Step 2: Cross-examination and critique
- Final Consensus: Weighted recommendation
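The debate flow above could be orchestrated roughly as follows. This is a sketch with lambda stand-ins for real model calls, and the majority vote is an assumed simplification of the weighted recommendation:

```python
from collections import Counter

def debate(question, models):
    # Step 1: independent analysis from each model
    positions = {name: ask(question) for name, ask in models.items()}
    # Step 2: cross-examination - each model critiques the others' positions
    critiques = {
        name: ask("Critique: " + "; ".join(p for n, p in positions.items() if n != name))
        for name, ask in models.items()
    }
    # Final consensus: simplified here to a majority vote over positions
    # (a real weighting would also factor in how positions survive critique)
    return Counter(positions.values()).most_common(1)[0][0]

# Hypothetical stand-ins for real model calls
models = {
    "gpt":    lambda q: "PostgreSQL",
    "claude": lambda q: "PostgreSQL",
    "gemini": lambda q: "SQLite",
}
choice = debate("best database for this app?", models)
```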
Usage:

```
# Claude Code
"quantum debate the best database choice for this application"
"quantum debate microservices vs monolith architecture"
```

Use short names instead of full model identifiers:
| Alias | Full Model | Provider |
|---|---|---|
| mini | gpt-4o-mini | OpenAI |
| gpt | gpt-4o | OpenAI |
| sonnet | claude-sonnet-4.5 | Anthropic |
| haiku | claude-haiku-4.5 | Anthropic |
| gemini | gemini-pro | Google |
| flash | gemini-flash | Google |
Create `~/.quantum_code/config.yaml`:

```yaml
version: "1.0"
models:
  my-custom-model:
    litellm_model: openai/gpt-4o
    aliases:
      - custom
    notes: "My custom GPT-4o configuration"
```

```bash
# 1. Write your code
# 2. Run comprehensive review
quantum src/new_feature.py --models claude-sonnet,gpt-4o
# 3. Address issues found
# 4. Run final verification
quantum src/new_feature.py
```

```
# In Claude Code:
"quantum compare using Redis vs PostgreSQL for session storage"
"quantum debate the best caching strategy for this high-traffic API"
```

```bash
# Security-focused review
quantum src/auth.py src/api.py --focus security

# OWASP compliance check
quantum src/ --security-scan
```

```yaml
# .github/workflows/code-review.yml
name: AI Code Review
on: [pull_request]
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Quantum Code Analysis
        run: |
          pip install quantum-code
          quantum src/ --json --models claude-sonnet,gpt-4o > review.json
```
```bash
# Check if .env file exists
ls -la .env

# Verify key format
grep OPENAI_API_KEY .env

# Test key loading
python -c "from quantum_code.settings import settings; print(settings.openai_api_key[:10] + '...')"
```

```bash
# Check available models
quantum --list-models

# Update to a supported model
quantum src/ --model gpt-4o-mini
```

```bash
# Test MCP server directly
python -m quantum_code.server

# Check Claude Code config
cat ~/.claude.json

# Then restart Claude Code completely
```

```bash
# Use fewer models for faster results
quantum src/ --models gpt-4o-mini,gemini-flash

# Use a single model for quick checks
quantum src/ --model gemini-flash
```

```
🔍 Code Quality: 8.5/10
✅ Good separation of concerns
✅ Clear naming conventions
⚠️ Consider adding input validation

🛡️ Security Score: 9.2/10
✅ No SQL injection vulnerabilities
✅ Proper authentication checks
⚠️ Add rate limiting for API endpoints

⚡ Performance: 7.8/10
✅ Efficient algorithms used
⚠️ Consider caching for frequent queries
```
- High Confidence: All models agree
- Medium Confidence: Majority agreement with minor differences
- Low Confidence: Significant disagreement (requires human review)
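One way these tiers could be computed mechanically (an illustrative rule, not necessarily quantum-code's exact thresholds):

```python
def confidence(verdicts):
    # Share of models backing the most common verdict
    top = max(set(verdicts), key=verdicts.count)
    agreement = verdicts.count(top) / len(verdicts)
    if agreement == 1.0:
        return "high"    # all models agree
    if agreement > 0.5:
        return "medium"  # majority agreement with minor differences
    return "low"         # significant disagreement: escalate to a human

print(confidence(["refactor", "refactor", "keep"]))  # prints "medium"
```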
- Use multiple models for important reviews
- Set up CI/CD integration for automated reviews
- Configure model aliases for team consistency
- Review consensus results carefully
- Start with single model for quick feedback
- Use compare mode for architecture decisions
- Leverage chat mode for implementation guidance
- Run security scans before deployment
- Use appropriate models for task complexity
- Batch reviews instead of reviewing single files
- Configure rate limits to control API usage
- Use cached results when possible
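"Use cached results" can be as simple as keying reviews by a content hash so unchanged files never trigger a new API call. A minimal sketch, where the in-memory cache and the `run_review` callback are hypothetical:

```python
import hashlib

_cache = {}  # content-hash -> review result (in-memory, for illustration only)

def cached_review(source, run_review):
    key = hashlib.sha256(source.encode()).hexdigest()
    if key not in _cache:
        _cache[key] = run_review(source)  # only pay for an API call on new content
    return _cache[key]

calls = []
def fake_review(src):
    calls.append(src)
    return "ok"

cached_review("print(1)\n", fake_review)
cached_review("print(1)\n", fake_review)  # identical content: served from cache
```

A persistent variant would store the same hash-keyed results on disk so caching survives across CI runs.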
Add proprietary or specialized models:

```yaml
# ~/.quantum_code/config.yaml
version: "1.0"
models:
  my-local-llm:
    provider: cli
    cli_command: ollama
    cli_args: ["run", "codellama"]
    aliases: ["local"]
```

For organizations with custom requirements:
- Private model endpoints
- Custom security policies
- Audit logging integration
- SLA-based support
- Documentation: https://github.com/Codewithevilxd/quantum-code
- Issues: https://github.com/Codewithevilxd/quantum-code/issues
- Discussions: GitHub Discussions tab
For commercial support and custom integrations:
- Email: enterprise@codewithevilxd.com
- Features: Custom model integration, enterprise deployment, training
Quantum Code - Multi-Model AI Orchestration for Superior Code Analysis

Ready to supercharge your code reviews? 🚀

```bash
pip install quantum-code
quantum --help
```