The following versions of ArbitrageAI are currently being supported with security updates:
| Version | Supported |
|---|---|
| 0.1.x | ✅ |
| 0.0.x | ❌ |
We take the security of ArbitrageAI seriously. If you believe you have found a security vulnerability, please report it to us as described below.
Please do NOT report security vulnerabilities through public GitHub issues.
Instead, please report them via one of the following methods:
- Email: Send an email to security@arbitrageai.example (replace with actual security contact)
- GitHub Private Vulnerability Reporting: Use the GitHub Security Advisories feature
- Direct Contact: Contact the maintainers directly through GitHub
Please include the following information in your report:
- Description: A clear description of the vulnerability
- Impact: The potential impact of the vulnerability
- Reproduction Steps: Detailed steps to reproduce the issue
- Affected Versions: Which versions are affected
- Proof of Concept: If possible, include a proof of concept or exploit code
- Suggested Fix: If you have suggestions for how to fix the issue
You can expect the following response timeline:
- Acknowledgment: Within 48 hours of your report
- Status Update: Within 5 business days with our assessment
- Resolution: We aim to resolve critical issues within 30 days
After submitting a report:
- Acknowledgment: We will acknowledge receipt of your report
- Assessment: We will investigate and assess the reported issue
- Communication: We will keep you informed of our progress
- Resolution: Once resolved, we will notify you and credit you (if desired)
- Disclosure: We will coordinate responsible disclosure
When deploying ArbitrageAI, please follow these security best practices:
Never commit sensitive information to version control. Use environment variables for:
```
# Required security configurations
# IMPORTANT: Generate secure secrets using: python scripts/generate_secrets.py
SECRET_KEY=[GENERATE_SECURE_SECRET]
JWT_SECRET=[GENERATE_SECURE_SECRET]
DATABASE_URL=postgresql://user:pass@localhost/db
REDIS_URL=redis://localhost:6379/0
OPENAI_API_KEY=[YOUR_API_KEY]
STRIPE_SECRET_KEY=[YOUR_STRIPE_KEY]
```
- Run containers as a non-root user (configured by default)
- Use Docker secrets for sensitive data in production
- Keep Docker images updated
- Scan images for vulnerabilities regularly
- Use HTTPS/TLS for all external communications
- Configure firewalls to restrict access
- Use private networks for internal services
- Enable rate limiting (configured by default)
- Use strong, unique passwords
- Enable SSL/TLS for database connections
- Perform regular, encrypted backups
- Apply principle of least privilege
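The `scripts/generate_secrets.py` helper referenced in the configuration example above is not reproduced here; a minimal sketch of what such a generator might look like, using Python's `secrets` module (the repository's actual script may differ), is:

```python
# Hypothetical sketch of a secret-generation helper (the repository's
# scripts/generate_secrets.py may differ): produces URL-safe random
# strings suitable for SECRET_KEY and JWT_SECRET.
import secrets


def generate_secret(num_bytes: int = 32) -> str:
    """Return a cryptographically secure, URL-safe random string."""
    return secrets.token_urlsafe(num_bytes)


if __name__ == "__main__":
    print(f"SECRET_KEY={generate_secret()}")
    print(f"JWT_SECRET={generate_secret()}")
```

`secrets.token_urlsafe(32)` draws 32 random bytes from the OS CSPRNG, which is the appropriate source for long-lived signing keys (unlike `random`, which is not cryptographically secure).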
When contributing to ArbitrageAI, follow these security guidelines:
- Input Validation: Always validate and sanitize user input
- Output Encoding: Encode output to prevent XSS
- Authentication: Verify authentication before authorization checks
- SQL Injection: Use parameterized queries (SQLAlchemy ORM)
- Path Traversal: Validate file paths and use safe joins
- Command Injection: Never use `shell=True` with user input
- Code Injection: Never use `eval()`, `exec()`, or `compile()` with user input. Use safe expression parsers instead
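The parameterized-query rule can be shown with a self-contained stdlib `sqlite3` example (the project itself uses SQLAlchemy against PostgreSQL, where bound parameters play the same role; the table and input here are made up for illustration):

```python
# Self-contained illustration of parameterized queries using stdlib sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "alice' OR '1'='1"  # attempted SQL injection payload

# Safe: the ? placeholder binds user_input as data, never as SQL,
# so the payload cannot change the query's structure.
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # → [] — the injection payload matches no row
```

Had the query been built with string interpolation instead, the `OR '1'='1` fragment would have become part of the SQL and returned every row.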
As of QAQC-001 (March 2026), ArbitrageAI uses a safe AST-based expression parser for alert conditions:
- No eval(): The dangerous `eval()` function has been completely removed
- AST-based parsing: Expressions are parsed into an Abstract Syntax Tree
- Whitelist approach: Only explicitly allowed operations are permitted
- Blocked operations: Function calls, attribute access, imports, and other dangerous operations are rejected
- Input validation: All expressions are validated before evaluation
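As an illustration of the whitelist approach, an evaluator along these lines can be sketched in a few lines. This is a simplified sketch, not the project's actual implementation in `logging_alerting.py`:

```python
# Minimal sketch of a whitelist-based AST expression evaluator: only
# numeric constants, named variables, basic arithmetic, and single
# comparisons are permitted; everything else raises ValueError.
import ast
import operator

_BIN_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
            ast.Mult: operator.mul, ast.Div: operator.truediv}
_CMP_OPS = {ast.Gt: operator.gt, ast.GtE: operator.ge,
            ast.Lt: operator.lt, ast.LtE: operator.le,
            ast.Eq: operator.eq, ast.NotEq: operator.ne}


def safe_eval(expr: str, variables: dict):
    """Evaluate a comparison/arithmetic expression over named variables."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.Name):
            if node.id in variables:
                return variables[node.id]
            raise ValueError(f"unknown variable: {node.id}")
        if isinstance(node, ast.BinOp) and type(node.op) in _BIN_OPS:
            return _BIN_OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.Compare) and len(node.ops) == 1 \
                and type(node.ops[0]) in _CMP_OPS:
            return _CMP_OPS[type(node.ops[0])](
                _eval(node.left), _eval(node.comparators[0]))
        # Calls, attribute access, subscripts, imports, etc. all land here.
        raise ValueError(f"disallowed expression: {ast.dump(node)}")

    return _eval(ast.parse(expr, mode="eval"))


print(safe_eval("cpu_usage > 80", {"cpu_usage": 90}))  # → True
```

Because the walker only recognizes the node types listed above, dangerous constructs such as `eval('...')` or attribute access are rejected before anything is executed.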
Example of safe expression usage:
```python
from src.utils.logging_alerting import AlertManager

alert_manager = AlertManager()

# Safe: Only allows comparisons and arithmetic
result = alert_manager._evaluate_condition("cpu_usage > 80", {"cpu_usage": 90})

# Blocked: Function calls raise ValueError
alert_manager._evaluate_condition("eval('malicious')", {})  # Raises ValueError
```
- Keep dependencies updated
- Review security advisories for dependencies
- Use `pip-audit` and `safety` to scan for vulnerabilities
- Pin dependency versions in production
- Never commit secrets to version control
- Use environment variables or secret management tools
- Rotate secrets regularly
- Use different secrets for different environments
- Write security-focused tests
- Test for common vulnerabilities (OWASP Top 10)
- Include security scanning in CI/CD
- Perform regular security audits
ArbitrageAI includes the following security features:
- JWT-based authentication
- Role-based access control (RBAC)
- Session management with secure cookies
- API key authentication for service accounts
- Pydantic models for request validation
- Type checking and sanitization
- File upload validation (type, size, content)
- Rate limiting to prevent abuse
- Encryption at rest (database encryption)
- Encryption in transit (TLS/SSL)
- Secure secret management
- Audit logging for sensitive operations
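To illustrate what JWT-style authentication rests on, here is a minimal stdlib sketch of HMAC-signed tokens. This is a toy under stated assumptions: a real deployment should use a maintained JWT library (e.g. PyJWT) rather than hand-rolled signing, and the names here are illustrative.

```python
# Toy HMAC-signed token, illustrating the signing/verification idea
# behind JWT-based authentication. Not the project's implementation.
import base64
import hashlib
import hmac
import json


def sign_token(payload: dict, secret: bytes) -> str:
    """Encode the payload and append an HMAC-SHA256 signature."""
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return f"{body.decode()}.{sig}"


def verify_token(token: str, secret: bytes) -> dict:
    """Reject tokens whose signature does not match, then decode."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(secret, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid signature")
    return json.loads(base64.urlsafe_b64decode(body))


secret = b"demo-secret"
token = sign_token({"sub": "user-1", "role": "admin"}, secret)
print(verify_token(token, secret)["role"])  # → admin
```

The constant-time comparison (`hmac.compare_digest`) matters: a naive `==` on signatures can leak timing information to an attacker probing byte-by-byte.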
The application includes comprehensive security headers:
- Strict-Transport-Security (HSTS): Forces HTTPS
- Content-Security-Policy (CSP): Prevents XSS attacks
- X-Content-Type-Options: Prevents MIME sniffing
- X-Frame-Options: Prevents clickjacking
- X-XSS-Protection: Legacy XSS filter
- Referrer-Policy: Controls referrer information
- Permissions-Policy: Controls browser features
- Cross-Origin-Opener-Policy / Cross-Origin-Embedder-Policy: Cross-origin isolation
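As a sketch, the headers above could be produced by a helper like the following; in a FastAPI/Starlette app they would typically be attached by a small response middleware. The values shown are common illustrative defaults, not the project's exact policy:

```python
# Illustrative security-header set matching the list above; values are
# example defaults, not ArbitrageAI's actual configuration.
def security_headers() -> dict:
    return {
        "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
        "Content-Security-Policy": "default-src 'self'",
        "X-Content-Type-Options": "nosniff",
        "X-Frame-Options": "DENY",
        # Legacy filter; modern guidance relies on CSP instead.
        "X-XSS-Protection": "1; mode=block",
        "Referrer-Policy": "no-referrer",
        "Permissions-Policy": "geolocation=(), microphone=()",
        "Cross-Origin-Opener-Policy": "same-origin",
    }


for name, value in security_headers().items():
    print(f"{name}: {value}")
```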
- Comprehensive audit logging
- Security event monitoring
- Intrusion detection alerts
- Distributed tracing for security analysis
We follow a coordinated disclosure process:
- Report: Researcher reports vulnerability
- Verify: We verify the vulnerability
- Fix: We develop and test a fix
- Release: We release a security patch
- Disclose: After 30 days, we publicly disclose
We request that researchers:
- Allow us reasonable time to fix the issue before public disclosure
- Work with us confidentially during the fix process
- Refrain from exploiting the vulnerability for testing
Our CI/CD pipeline includes:
- Bandit: Python security linter
- pip-audit: Dependency vulnerability scanning
- Safety: Alternative dependency scanner
- Gitleaks: Secret detection
- Detect-secrets: Pre-commit secret detection
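The same scanners can be run locally before pushing. A plausible invocation of each (assuming the tools are installed, e.g. via `pip install bandit pip-audit safety detect-secrets` plus the gitleaks binary) looks like:

```shell
bandit -r src/                  # static security analysis of Python code
pip-audit                       # audit installed dependencies against known CVEs
safety check                    # alternative dependency vulnerability scanner
gitleaks detect --source .      # scan the repository for committed secrets
detect-secrets scan > .secrets.baseline   # build/update the secrets baseline
```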
We perform regular:
- Code security reviews
- Penetration testing
- Threat modeling
- Security architecture reviews
This section lists known vulnerabilities and their status:
| ID | Severity | Status | Fixed In | Notes |
|---|---|---|---|---|
| QAQC-001 | CRITICAL | Fixed | v0.1.0 | Replaced dangerous eval() with safe AST-based expression parser in logging_alerting.py |
| CVE ID | Severity | Status | Fixed In | Notes |
|---|---|---|---|---|
| - | - | - | - | No known unresolved vulnerabilities |
Security updates are released as patch versions (e.g., 0.1.1, 0.1.2). We recommend:
- Critical: Update within 24 hours
- High: Update within 7 days
- Medium: Update within 30 days
- Low: Update in next maintenance cycle
In the event of a security incident:
- Containment: Isolate affected systems
- Assessment: Determine scope and impact
- Eradication: Remove threat and vulnerabilities
- Recovery: Restore systems and data
- Lessons Learned: Document and improve
ArbitrageAI is designed to help with compliance for:
- GDPR: Data protection and privacy
- SOC 2: Security controls
- ISO 27001: Information security management
- PCI DSS: Payment card data security (when configured properly)
We rely on the following security-critical dependencies:
- FastAPI: Web framework with built-in security features
- SQLAlchemy: ORM with parameterized queries
- Pydantic: Data validation and security
- Redis: Caching and distributed locking
- OpenTelemetry: Security monitoring and tracing
Monitor the project's GitHub Security Advisories and release notes for security updates.
For security-related questions:
- Email: security@arbitrageai.example
- GitHub: https://github.com/anchapin/ArbitrageAI/security
- Discussions: https://github.com/anchapin/ArbitrageAI/discussions/categories/security
We would like to thank the following for their contributions to our security:
- Security researchers who responsibly disclose vulnerabilities
- The open-source community for security tools and libraries
- Contributors who help improve our security posture
Last Updated: March 2, 2026
Version: 1.0
Review Cycle: This policy is reviewed quarterly and updated as needed.