Enterprise-grade multi-provider AI chat platform with 45+ models, Firebase cloud configuration, and modern Material 3 design.
Mark VII is a multi-LLM chat application for Android that provides unified access to 45+ state-of-the-art AI models from leading providers including Anthropic, OpenAI, Meta, Deepseek, Mistral, Google, and more through a single, elegant interface.
- 🤖 45+ AI Models - Gemini, GPT, Llama, Deepseek, Mistral, Groq-hosted models, and more
- 🔥 Cloud-First Architecture - Firebase-powered configuration management and real-time sync
- ⚡ High Performance - <100ms startup, connection pooling, optimized streaming
- 🔗 Triple API Support - OpenRouter (40+ models) + Direct Gemini API + Groq (ultra-fast inference)
- 📱 Modern Material 3 UI - Dynamic theming, smooth animations, haptic feedback
- 🌐 Multilingual TTS - Automatic language detection for 15+ languages
- 🔒 Enterprise Security - Firebase Authentication, encrypted storage, HTTPS only
- 🐍 DevOps Tools - Python CLI for bulk model management via CSV
Flexibility: Switch between providers and models instantly without code changes
Reliability: Automatic error recovery, exception handling, offline capability
Performance: 24x faster than v1.x of Mark-VII, real-time streaming, optimized rendering
Developer-Friendly: Complete Firebase integration, comprehensive documentation, open source
Triple API Architecture:
- OpenRouter Integration - Direct access to 100+ models with real-time catalog sync
- Gemini API Integration - Native Google Gemini support with vision capabilities
- Groq Integration - Ultra-fast inference API with all available Groq-hosted models
- Seamless Switching - Toggle between all three APIs mid-conversation
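Seamless switching is possible because each model's cloud configuration records which API serves it, so picking a provider reduces to routing the next request to the matching client. A minimal sketch of that dispatch, in Python for brevity (the class and field names are illustrative assumptions, not the app's actual Kotlin code; the base URLs are each provider's publicly documented endpoints):

```python
# Illustrative per-model API routing; not the app's actual implementation.
from dataclasses import dataclass

BASE_URLS = {
    "openrouter": "https://openrouter.ai/api/v1/",
    "gemini": "https://generativelanguage.googleapis.com/",
    "groq": "https://api.groq.com/openai/v1/",
}

@dataclass
class ModelConfig:
    id: str
    api_type: str  # "openrouter" | "gemini" | "groq"

def base_url_for(model: ModelConfig) -> str:
    """Pick the endpoint for the provider that serves this model."""
    return BASE_URLS[model.api_type]
```

Because the mapping lives in configuration rather than code, a mid-conversation model change only changes which entry is looked up for the next request.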
Supported Companies:
- Google - Gemini 2.0/2.5 Flash/Pro, Gemma 2/3
- Anthropic - Claude 3.5 Sonnet, Opus, Haiku
- OpenAI - GPT-4 Turbo, GPT-4o, GPT-3.5
- Meta - Llama 3.1/3.3/4 (8B to 405B parameters)
- Deepseek - Chat V3.1, R1, R1 Distill variants
- Mistral AI - Full lineup from Small to Large
- Groq - Llama 3 (8B/70B), Mixtral 8x7B, Gemma 2, Whisper (via Groq's ultra-fast inference)
- Qwen (Alibaba) - Qwen2.5, Qwen3 Coder
- xAI - Grok with vision support
- 30+ more - Cohere, AI21, Perplexity, and others
- Remote Model Management - Add/remove models without app updates
- Dynamic API Keys - Update credentials in real-time
- Instant Sync - Changes reflect on next app restart
- Secure Vault - API keys stored in Firebase, never in code
- Offline Support - Cached configuration for offline operation
- Exception Handling - Auto-detection and retry for `:free`-suffix models
- Smart Recovery - Automatic 404 error handling with model correction
- Real-Time Streaming - Server-sent events (SSE) for live responses
- Multi-Modal Support - Text and image understanding (vision models)
- Context Management - 6-message history for optimal performance
- Session Persistence - Cloud-synced chat history with Google Sign-In
- Brand Attribution - Clear provider identification (e.g., "Mark VII x Anthropic")
- Smart Retry - Re-run prompts with different models
- Stop Generation - Cancel responses instantly with red stop button
- Voice I/O - Speech recognition input + multilingual text-to-speech
- PDF Export - Professional formatting with syntax highlighting
- Copy & Share - Easy text extraction and sharing
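The 6-message context management above keeps request payloads small and latency predictable: only the most recent messages accompany each request. Trimming the history can be sketched as (illustrative Python, not the app's Kotlin implementation):

```python
# Fixed context window: send only the most recent six messages.
# The constant mirrors the "6-message history" described above.
CONTEXT_WINDOW = 6

def context_for(history: list[dict]) -> list[dict]:
    """Trim chat history to the last CONTEXT_WINDOW messages."""
    return history[-CONTEXT_WINDOW:]
```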
Material 3 Design:
- Dynamic theming (Light, Dark, System Default)
- iOS-style color palettes with smooth transitions
- Theme-aware status bar and navigation
- No white flash on startup
Performance Optimizations:
- <100ms startup time (24x faster than v1.x)
- Connection pooling for API requests
- Memoized rendering and lazy loading
- Optimized scroll performance
Interaction Design:
- Streaming cursor with haptic feedback
- 40dp touch targets for accessibility
- Smooth Lottie animations
- Auto-scroll to latest messages
- Syntax highlighting for code blocks
- Markdown support with inline formatting
Multilingual Support:
- MLKit language detection (15+ languages)
- Automatic TTS language switching
- Support for: Chinese, Japanese, Korean, Spanish, French, German, Italian, Portuguese, Russian, Arabic, Hindi, English
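The app relies on MLKit's on-device language identification for this. As a rough illustration of the routing idea only — this toy stand-in is NOT the MLKit API, covers far fewer cases, and mishandles mixed scripts (e.g., Japanese text that opens with kanji) — a detector could inspect Unicode character names:

```python
# Toy language guesser for TTS routing. Illustrative only; the app
# uses MLKit's language ID, not this heuristic.
import unicodedata

def guess_tts_language(text: str) -> str:
    for ch in text:
        name = unicodedata.name(ch, "")
        if name.startswith("CJK UNIFIED"):
            return "zh"
        if "HIRAGANA" in name or "KATAKANA" in name:
            return "ja"
        if "HANGUL" in name:
            return "ko"
        if "ARABIC" in name:
            return "ar"
        if "CYRILLIC" in name:
            return "ru"
        if "DEVANAGARI" in name:
            return "hi"
    return "en"  # default when no non-Latin script is seen
```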
Efficiently manage 45+ AI models using CSV-based workflows.
Features:
- Bulk Import/Export - Manage models in Excel/Google Sheets
- Validation - Automatic format and field checking
- Version Control - Track changes with Git
- Interactive CLI - Menu-driven operations
- Zero Downtime - Update models without app redeployment
Python CLI Usage:
cd update_models
# Import models from CSV (recommended)
python update_firebase_models.py --csv models.csv
# List current models in Firestore
python update_firebase_models.py --list
# Interactive mode
python update_firebase_models.py
# Export models to CSV
python update_firebase_models.py --export my_models.csv

CSV Format:
apiModel,displayName,isAvailable,order
google/gemini-2.0-flash-exp,Gemini 2.0 Flash,TRUE,1
deepseek/deepseek-chat-v3.1,Deepseek Chat V3.1,TRUE,2
anthropic/claude-3-5-sonnet-20241022,Claude 3.5 Sonnet,TRUE,3

Benefits:
- Edit 45+ models in Excel/Google Sheets
- Bulk enable/disable models
- Easy reordering with sort priority
- Version control friendly
- Automatic validation
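The validation step can be pictured as follows — a hypothetical Python helper (not the CLI's actual code) that checks each CSV row for the four required fields before anything is written to Firestore:

```python
# Hypothetical CSV row validation matching the format above:
# apiModel,displayName,isAvailable,order
import csv
import io

def parse_rows(csv_text: str) -> list[dict]:
    """Return validated rows; raise ValueError on a malformed row."""
    rows = []
    reader = csv.DictReader(io.StringIO(csv_text))
    for i, row in enumerate(reader, start=2):  # header is line 1
        if not row.get("apiModel") or not row.get("displayName"):
            raise ValueError(f"line {i}: missing apiModel/displayName")
        flag = (row.get("isAvailable") or "").upper()
        if flag not in ("TRUE", "FALSE"):
            raise ValueError(f"line {i}: isAvailable must be TRUE/FALSE")
        try:
            order = int(row["order"])
        except (KeyError, TypeError, ValueError):
            raise ValueError(f"line {i}: order must be an integer")
        rows.append({**row, "isAvailable": flag == "TRUE", "order": order})
    return rows
```

Failing fast with a line number keeps bulk edits in spreadsheets safe: a single bad row aborts the import instead of half-updating the model list.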
- Android Studio Ladybug or newer
- JDK 17+
- Firebase account (free tier sufficient)
- OpenRouter API key (free tier available at openrouter.ai/keys)
- Optional: Google Gemini API key for direct Gemini access
- Optional: Groq API key for Groq inference (console.groq.com/keys)
git clone https://github.com/daemon-001/Mark-VII.git
cd Mark-VII

- Go to Firebase Console
- Click "Add project" → Enter name: `Mark-VII`
- Disable Google Analytics (optional) → "Create project"
- Click Android icon in Project Overview
- Enter package name: `com.daemon.markvii`
- Download `google-services.json` → Place in `Mark-VII/app/` directory
- Navigate to Firestore Database → "Create database"
- Select "Start in test mode" → Choose region → "Enable"
Option A: Python Script (Recommended)
cd update_models
pip install firebase-admin
# Download service account key:
# Firebase Console → Project Settings → Service Accounts
# → Generate New Private Key → Save as mark-vii-firebase-service-account-key.json
# Import 45+ models from CSV
python update_firebase_models.py --csv models.csv

Option B: Manual Setup
- In Firestore, create collection: `app_config`
- Create document: `models`
- Add field: `list` (type: array) with model objects:

{
  "id": "anthropic/claude-3.5-sonnet",
  "name": "Claude 3.5 Sonnet",
  "provider": "anthropic",
  "isEnabled": true,
  "sortOrder": 1,
  "apiType": "openrouter"
}

- Create document: `api_keys`
- Add fields:
  - `openrouterApiKey` (string) - Get from OpenRouter
  - `geminiApiKey` (string, optional) - Get from Google AI Studio
  - `groqApiKey` (string, optional) - Get from Groq Console
# In Android Studio:
# 1. File → Open → Select Mark-VII folder
# 2. File → Sync Project with Gradle Files
# 3. Build → Make Project (Ctrl+F9)
# 4. Run → Run 'app' (Shift+F10)

First Launch:
- Grant internet permissions
- Sign in with Google (creates cloud-synced chat sessions)
- Select a model and start chatting!
Android Smartphone:
- Download APK from Releases
- Enable "Install from Unknown Sources" in Settings → Security
- Open APK file → Install → Open Mark VII
Android Emulator (PC):
- Download APK from Releases
- Drag APK into emulator window or use APK installer
- Launch Mark VII from app drawer
See Quick Start above for source-based setup.
- Check Firestore structure: `app_config/models` document must have `list` field (not `models`)
- Verify `app_config/api_keys` document has `openrouterApiKey` field
- Ensure `google-services.json` is in `app/` folder
- Check `exp_models` collection for exception models
- Ensure `groqApiKey` field is present in the `app_config/api_keys` Firestore document
- Or add your personal key in Settings → Groq API Key → Verify → Enable
- Get a free key from console.groq.com/keys
- Invalid API key
- Get new key from OpenRouter
- Update in Firebase: `app_config/api_keys/openrouterApiKey`
- Model may require ":free" suffix
- App automatically adds failing models to the `exp_models` collection
- Retry the request after the error - it will use the correct format
- Check model exists on OpenRouter Models
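The smart-recovery behavior can be sketched simply: on a 404, append the `:free` suffix and retry once. A hypothetical illustration (function names are assumptions, not the app's actual code):

```python
# Illustrative ":free"-suffix recovery for OpenRouter 404s.
def corrected_model_id(model_id: str) -> str:
    """Append ":free" unless the ID already carries it."""
    return model_id if model_id.endswith(":free") else model_id + ":free"

def send_with_recovery(model_id: str, attempt) -> str:
    """attempt(model_id) -> HTTP status; return the model ID that was used."""
    if attempt(model_id) == 404:
        fixed = corrected_model_id(model_id)
        attempt(fixed)  # retry once with the corrected ID
        return fixed
    return model_id
```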
- Too many requests
- Wait a few seconds and retry
- Consider upgrading OpenRouter plan
- OpenRouter or model provider temporarily down
- Try a different model
- Wait and retry later
- Clear app data: Settings → Apps → Mark VII → Clear Data
- Reinstall the app
- Check for updates
# Clean and rebuild
./gradlew clean
./gradlew assembleDebug

Mark VII uses a cloud-first architecture combining Firebase's real-time configuration management with triple AI API access (OpenRouter + Gemini + Groq), enabling zero-downtime model updates and enterprise-grade reliability.
┌─────────────────────────────────────────────────────────────┐
│ Mark VII Android App │
├─────────────────────────────────────────────────────────────┤
│ UI Layer (Jetpack Compose + Material 3) │
│ ├─ MainActivity.kt - Main UI orchestration │
│ ├─ SettingsScreen.kt - Theme & account management │
│ └─ DrawerContent.kt - Navigation & session list │
├─────────────────────────────────────────────────────────────┤
│ ViewModel Layer (MVVM) │
│ └─ ChatViewModel.kt - State management + logic │
├─────────────────────────────────────────────────────────────┤
│ Data Layer │
│ ├─ ChatData.kt - OpenRouter API orchestration │
│ ├─ GeminiClient.kt - Direct Gemini API client │
│ ├─ FirebaseConfigManager - Remote model configuration │
│ ├─ ChatHistoryManager - Session persistence (Cloud) │
│ ├─ AuthManager - Google Sign-In + tokens │
│ └─ ThemePreferences - Local theme storage │
├─────────────────────────────────────────────────────────────┤
│ Network Layer │
│ ├─ Retrofit + OkHttp - HTTP client with pooling │
│ ├─ SSE EventSource - Streaming response parser │
│ └─ Connection Pool - Persistent connections │
└─────────────────────────────────────────────────────────────┘
↓ ↓
┌───────────────────┐ ┌──────────────────────────┐
│ Firebase Services │ │ AI API Providers │
├───────────────────┤ ├──────────────────────────┤
│ Firestore │ │ OpenRouter (100+ models) │
│ Authentication │ │ Direct Gemini API │
│ Analytics │ │ Groq (ultra-fast) │
└───────────────────┘ └──────────────────────────┘
Frontend:
- Kotlin with Jetpack Compose for declarative UI
- Material 3 Design System with dynamic theming
- Lottie for animations, Markdown with syntax highlighting
- StateFlow for reactive state management
Architecture Pattern:
- MVVM (Model-View-ViewModel) with repository pattern
- Coroutines + Flow for asynchronous operations
- Dependency injection via constructor parameters
Backend Integration:
- Firebase Firestore: Cloud configuration + chat history
- Firebase Authentication: Google Sign-In with OAuth
- Firebase Analytics: Usage tracking (optional)
Networking:
- Retrofit 2.11.0 with OkHttp 4.12.0
- Server-Sent Events (SSE) for streaming responses
- Connection pooling for <100ms startup time
AI Providers:
- OpenRouter API: 100+ models with unified interface
- Direct Gemini API: Native Google integration with vision
- Groq API: OpenAI-compatible, ultra-fast inference (Llama 3, Mixtral, Gemma)
- MLKit Language ID: Automatic TTS language detection
Tools:
- Python 3.11+ with Firebase Admin SDK
- CSV-based model management workflow
- Git for version control
Mark-VII/
├── app/
│ ├── google-services.json # Firebase config (download from console)
│ └── src/main/java/com/daemon/markvii/
│ ├── MainActivity.kt # App entry + Firebase initialization
│ ├── ChatViewModel.kt # MVVM state management
│ ├── data/
│ │ ├── Chat.kt # Data models (Message, ChatRequest, etc.)
│ │ ├── ChatData.kt # API orchestration (OpenRouter + Groq)
│ │ ├── GeminiClient.kt # Direct Gemini API client
│ │ ├── OpenRouterApi.kt # OpenRouter Retrofit interface + client
│ │ ├── GroqApi.kt # Groq Retrofit interface + client
│ │ ├── FirebaseConfig.kt # Model configuration models
│ │ ├── FirebaseConfigManager.kt # Firestore config operations
│ │ ├── AuthManager.kt # Google Sign-In + token management
│ │ ├── ChatHistoryManager.kt # Cloud chat session storage
│ │ ├── ThemePreferences.kt # Local theme persistence
│ │ ├── UserApiPreferences.kt # User API key storage (Gemini/OR/Groq)
│ │ └── Keys.kt # App metadata + build info
│ ├── ui/theme/
│ │ ├── Theme.kt # Material 3 theme + AppColors
│ │ └── Color.kt # Theme color definitions
│ ├── SettingsScreen.kt # Theme selector + account UI
│ ├── DrawerContent.kt # Navigation + session management
│ └── utils/
│ └── PdfGenerator.kt # PDF export with syntax highlighting
├── update_models/
│ ├── update_firebase_models.py # Model management CLI
│ ├── models.csv # Pre-configured 49 models
│ └── mark-vii-firebase-service-account-key.json # Service account (download)
├── CHANGELOG.md # Git commit-based version history
├── FIREBASE_SETUP.md # Detailed Firebase setup guide
└── README.md # Project documentation
- Launch - Open Mark VII (cold start <100ms)
- Sign In - Tap profile icon → "Sign in with Google" (optional, enables cloud sync)
- Select Model - Tap dropdown → Choose model (e.g., "Claude 3.5 Sonnet")
- Input - Type message or tap microphone icon for voice input
- Send - Tap send button (▲) → Get real-time streaming response
- Attribution - See response header "Mark VII x Anthropic"
- Stop - Tap red stop button (■) to cancel streaming anytime
- Open model dropdown
- Select different provider mid-conversation to compare responses
- Chat history preserves which API was used per message
- Open drawer (swipe right or tap menu) → Tap Settings icon (⚙️)
- Under "Appearance", tap Theme selector
- Choose theme:
- System Default - Follows device settings (auto light/dark)
- Light - iOS-inspired bright theme with gentle shadows
- Dark - Eye-friendly with OLED-optimized blacks
- Theme applies instantly without restart, status bar updates automatically
- Open navigation drawer (swipe right or hamburger menu)
- View all sessions with preview of last message
- Switch - Tap session to load conversation
- Rename - Long-press → "Rename" → Enter new name
- Delete - Long-press → "Delete" → Confirm
- New Chat - Tap "+" button → Fresh session created
- Cloud Sync - All sessions backup to Firebase when signed in
- Tap plus icon (+) in input field
- Select image from gallery or take photo
- Type question about image (e.g., "What's in this image?")
- Works with vision-capable models: Gemini 2.0 Flash, Claude 3.5, GPT-4o
- Image preview shows pin indicator when attached
- Send to get multi-modal analysis
- Receive AI response in any language (Chinese, Spanish, Arabic, etc.)
- Tap speaker icon on message
- MLKit automatically detects language
- TTS reads response in correct language/accent
- Supports 15+ languages: Chinese (Mandarin), Japanese, Korean, Spanish, French, German, Italian, Portuguese, Russian, Arabic, Hindi, English, Dutch, Polish, Turkish
- Complete conversation with AI model
- Tap 3-dot menu → "Export to PDF"
- PDF generates with:
- Syntax-highlighted code blocks
- Formatted Markdown rendering
- Message timestamps
- Model attribution (e.g., "Mark VII x Anthropic")
- Share via any app or save to storage
- Professional formatting for documentation/reports
- Send message to Model A (e.g., Claude 3.5 Sonnet)
- Get response → Not satisfied with output
- Tap model dropdown → Switch to Model B (e.g., GPT-4o)
- Tap "Retry Last Prompt" button
- Get alternative response from different AI
- Compare responses side-by-side in chat history
- Tap microphone icon in input field
- Grant microphone permission (first time)
- Speak your question naturally
- Speech-to-text transcription appears automatically
- Edit if needed, then send
- Works in all languages supported by Android Speech Recognition
Startup Performance:
- Cold Start: <100ms (24x faster than v1.x)
- Model Loading: <50ms (cached configuration)
- Theme Application: <10ms (instant visual feedback)
- Firebase Init: Asynchronous, non-blocking
Runtime Optimization:
- Streaming Latency: <500ms to first token
- Rendering: 60 FPS with memoized composables
- Memory: <50MB baseline, <150MB during streaming
- Network: Connection pooling reduces request overhead by 70%
- TTS Language Detection: <100ms with MLKit on-device processing
Note: Times vary based on network, server load, and prompt complexity.
| Metric | v1.x (Gemini) | v2.x (OpenRouter) | v3.0+ (Dual API) | v3.3 (Triple API) |
|---|---|---|---|---|
| Startup Time | ~2.65s | ~110ms | <100ms | <100ms |
| Models Available | 5-10 | 100+ | 100+ (dual APIs) | 100+ (three APIs) |
| API Providers | 1 (Google) | Multiple | OpenRouter + Gemini | + Groq |
| Configuration | Hardcoded | Cloud-based | Cloud + real-time | Instant updates |
| Streaming | No | Yes (SSE) | Yes (both APIs) | Yes (all APIs) |
| Error Handling | Basic | Comprehensive | Auto-retry + 404 fix | Resilient |
| TTS Languages | 1-2 | 1-2 | 15+ (MLKit) | 15+ (MLKit) |
| Theme Support | Basic | Light/Dark | L/D/System + iOS-style | Polished |
| Offline Support | No | Limited | Cached config | Works offline |
| Stop Generation | No | Yes | Yes + haptics | User control |
We welcome contributions from the community! Whether you're fixing bugs, adding features, or improving documentation, your help makes Mark VII better.
Reporting Issues:
- Open an issue
- Include: Android version, app version, steps to reproduce
- Add screenshots if helpful
Feature Requests:
- Open a Feature Request with tag `enhancement`
- Describe the feature and its use case
- Provide mockups/examples if relevant
- Discuss with maintainers before implementing
Code Contributions:
- Fork the repository
- Create feature branch: `git checkout -b feature/your-feature-name`
- Make changes following Kotlin code style guidelines
- Test thoroughly on multiple devices/Android versions
- Commit with clear messages: `git commit -m "Add: Feature description"`
- Push to your fork: `git push origin feature/your-feature-name`
- Open Pull Request with detailed description
// Firebase (BOM manages versions)
firebase-bom:33.7.0
firebase-firestore-ktx // Cloud configuration + chat history
firebase-analytics-ktx // Usage analytics
firebase-auth-ktx // Google Sign-In authentication
// Google Services
play-services-auth:21.3.0 // Google Sign-In UI
mlkit-language-id:17.0.6 // Multilingual TTS detection
// Networking
retrofit:2.11.0 // Type-safe HTTP client
okhttp:4.12.0 // Connection pooling + SSE
gson:2.10.1 // JSON serialization
// PDF Generation
itext7-core:7.2.5 // PDF document creation
html2pdf:4.0.5 // HTML to PDF conversion
// Markdown & Syntax Highlighting
compose-markdown:0.5.4 // Rich text rendering
code-highlight:2.0.0 // Syntax highlighting for code blocks
// UI & Animation
androidx.compose.bom:2024.12.01 // Jetpack Compose
androidx.material3:* // Material 3 components
lottie-compose:6.0.0 // Lottie animations
coil-compose:2.4.0 // Image loading

- Minimum SDK: 24 (Android 7.0 Nougat) or higher
- Target SDK: 35 (Android 15)
- Kotlin: 2.1.0
- Gradle: 8.7
- JDK: 17+
- Encrypted Storage: API keys secured in Firebase, never in source code or APK
- HTTPS Only: All API communication uses TLS 1.3 encryption
- OAuth 2.0: Google Sign-In with secure token management
- Local Storage: Chat history cached locally, synced to Firebase when signed in
- No Tracking: Zero third-party analytics or ad networks
- Firestore Rules: Read/write access restricted to authenticated users
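The concrete Firestore rules are not reproduced in this README. A typical ruleset matching the description above — config readable by any signed-in user, per-user chat data locked to its owner — might look like the following (collection paths other than `app_config` are assumptions; writes to config are expected to come from the admin SDK, which bypasses these rules):

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Shared configuration: readable only when authenticated.
    match /app_config/{doc} {
      allow read: if request.auth != null;
      allow write: if false;
    }
    // Chat sessions: each user may access only their own documents.
    match /users/{uid}/{document=**} {
      allow read, write: if request.auth != null && request.auth.uid == uid;
    }
  }
}
```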
- `google-services.json` excluded from Git via `.gitignore`
- `mark-vii-firebase-service-account-key.json` excluded (Python tool only)
- Firebase project rules restrict access to authenticated users
- API keys rotatable without app updates (cloud configuration)
- Sign Out Anytime - Settings → Sign Out → Clears local cache
- Delete Chat Sessions - Long-press session → Delete (removes from cloud)
- Theme Preferences - Stored locally, never synced
- Optional Cloud Sync - Chat history syncs only when signed in with Google
- Offline Mode - Works with cached configuration when no internet
- Never commit `google-services.json` or service account keys to public repos
- Rotate Firebase API keys periodically in Firebase Console
- Use Firebase Authentication rules to restrict Firestore access
- Enable Firebase App Check for production builds to prevent API abuse
Bug Reports:
- GitHub Issues
- Include: Android version, app version, steps to reproduce
Feature Requests:
- Feature Request Form
- Describe use case and expected behavior
Documentation:
- CHANGELOG.md - Version history with git timestamps
- FIREBASE_SETUP.md - Detailed Firebase configuration
- README.md - Comprehensive project documentation
Email Support:
- Developer: nitesh.kumar4work@gmail.com
- Response time: 1-12 hours
Developer:
- Name: Nitesh Kumar
- GitHub: @daemon-001
- LinkedIn: @daemon001
Project:
- Repository: Mark-VII
- Stars: Give us a ⭐ if you find this useful!
- Forks: Welcome - see Contributing section
Every contribution, no matter how small, helps make Mark VII better for everyone!
Built with ❤️ by Nitesh
Home • Download • Report Bug • Request Feature
Enjoy your advanced multi-LLM AI chatbot experience!