This project implements federated learning for a recommendation model, specifically designed for Android devices. It allows multiple Android clients to collaboratively train a recommendation model without sharing raw user data.
```
recommendation_fl/
├── client/                          # Android client application
│   └── app/
│       ├── src/main/
│       │   ├── java/flwr/android_client/
│       │   │   ├── RecommendationMainActivity.java  # Main UI
│       │   │   ├── RecommendationFlowerClient.java  # FL client logic
│       │   │   ├── RecommendationFlowerWorker.java  # Background worker
│       │   │   └── RecommendationModelWrapper.java  # TensorFlow Lite wrapper
│       │   ├── assets/
│       │   │   └── recommendation_model/            # TFLite model
│       │   └── res/                                 # UI resources
│       └── build.gradle                             # Build configuration
├── server/                          # Python FL server
│   └── recommendation_server.py     # FL server implementation
├── requirements.txt                 # Python dependencies
└── README.md                        # This file
```
- Federated Learning: Collaborative training across multiple Android devices
- Recommendation Model: TensorFlow Lite model for user behavior recommendations
- Privacy-Preserving: No raw user data leaves the device
- Real-time Training: Background training with WorkManager
- Synthetic Data: Generates realistic training data for demonstration
- User Interface: Simple interface to configure server connection
- Background Processing: Training runs in background using WorkManager
- Model Management: Loads and manages TensorFlow Lite recommendation model
- Data Generation: Creates synthetic user behavior data for training
- Progress Monitoring: Real-time status updates and logging
- Adaptive Training: Adjusts training parameters based on round number
- Client Management: Handles multiple Android clients
- Federated Averaging: Implements FedAvg strategy for model aggregation
- Configurable Rounds: Supports configurable number of training rounds
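The "adaptive training" behavior above can be sketched as a per-round configuration function that the server sends to clients each round. The parameter names and the schedule below are illustrative assumptions, not the project's actual code:

```python
def fit_config(server_round: int) -> dict:
    """Illustrative per-round training config: later rounds run more
    local epochs with a decayed learning rate (assumed schedule)."""
    return {
        "local_epochs": 1 if server_round <= 3 else 3,
        "learning_rate": 0.01 * (0.9 ** (server_round - 1)),
    }

# Early rounds train briefly; later rounds fine-tune more aggressively.
print(fit_config(1))   # → {'local_epochs': 1, 'learning_rate': 0.01}
```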
- Android Studio (latest version)
- Python 3.8+ with pip
- Android device or emulator (API level 24+)
```bash
# Install Python dependencies for the FL server
cd server
pip install -r ../requirements.txt
```

```bash
# Build the Android client
cd client
./gradlew assembleDebug

# Install on connected device/emulator
adb install app/build/outputs/apk/debug/app-debug.apk
```

```bash
# Start the FL server
cd server
python recommendation_server.py
```

- Open the "Recommendation FL Client" app
- Enter the server IP (use `10.0.2.2` for the Android emulator)
- Enter the server port (`8080`)
- Enter a data slice number (unique for each client)
- Click "Start" to begin federated learning
- Port: `8080` (configurable in `recommendation_server.py`)
- Minimum Clients: 2 (configurable)
- Training Rounds: 10 (configurable)
- Strategy: FedAvg with adaptive parameters
- Model Path: `assets/recommendation_model/recommendation.tflite`
- Training Data: 100 synthetic samples (configurable)
- Features: 10-dimensional user behavior vectors
- Output: rating predictions (0-5 scale)
The recommendation model processes:
- Device Features: Device ID, OS, gender
- Behavior Features: App usage, screen time, battery drain
- Usage Patterns: Apps installed, data usage, age
- Output: Compatibility rating (0-5 scale)
- Client Registration: Android clients connect to FL server
- Model Distribution: Server sends initial model weights
- Local Training: Each client trains on local synthetic data
- Weight Aggregation: Server aggregates model updates
- Model Update: Server distributes improved model
- Evaluation: Clients evaluate model performance
- Iteration: Process repeats for specified rounds
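The weight-aggregation step (FedAvg) averages client updates weighted by each client's local sample count. A minimal pure-Python sketch over flat weight vectors (the real implementation operates on full tensor lists inside the FL framework):

```python
def fedavg(client_updates):
    """Federated averaging: each update is (weights, num_samples).
    Returns the sample-weighted mean of the client weight vectors."""
    total = sum(n for _, n in client_updates)
    dim = len(client_updates[0][0])
    avg = [0.0] * dim
    for weights, n in client_updates:
        for i, w in enumerate(weights):
            avg[i] += w * n / total
    return avg

# Two clients: one trained on 100 samples, one on 300; the larger
# client's update contributes proportionally more.
global_w = fedavg([([1.0, 2.0], 100), ([3.0, 6.0], 300)])
print(global_w)  # → [2.5, 5.0]
```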
- Start the server: `python recommendation_server.py`
- Install and run the Android app
- Configure the connection (IP: `10.0.2.2`, Port: `8080`)
- Start federated learning
- Monitor logs and progress
- Start server
- Install app on multiple devices/emulators
- Configure each with unique data slice numbers
- Start all clients simultaneously
- Observe collaborative training
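Each client's unique data slice number keeps clients from training on identical samples. One way this could work is round-robin partitioning of a shared synthetic dataset; the helper below is a hypothetical sketch of that scheme, not the project's actual code:

```python
def take_slice(dataset, slice_id, num_clients):
    """Return the subset of `dataset` assigned to client `slice_id`
    (0-based) by round-robin partitioning (assumed scheme)."""
    return [x for i, x in enumerate(dataset) if i % num_clients == slice_id]

data = list(range(10))
print(take_slice(data, 0, 2))  # → [0, 2, 4, 6, 8]
print(take_slice(data, 1, 2))  # → [1, 3, 5, 7, 9]
```

Because the slices are disjoint, two clients given different slice numbers never share a training sample, which is what makes the multi-client test meaningful.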
- Client Logs: Check Android logcat with tag "RecommendationFlower"
- Server Logs: Monitor Python console output
- Training Progress: View in-app status and log display
- Model Performance: Track loss and MAE metrics
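The loss and MAE metrics above are computed from predicted versus actual ratings. A minimal sketch, assuming a mean-squared-error loss (the project does not state which loss it uses):

```python
def mse(preds, targets):
    """Mean squared error between predicted and actual ratings."""
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

def mae(preds, targets):
    """Mean absolute error between predicted and actual ratings."""
    return sum(abs(p - t) for p, t in zip(preds, targets)) / len(preds)

preds, targets = [4.5, 2.0, 3.0], [5.0, 2.0, 4.0]
print(round(mse(preds, targets), 4))  # → 0.4167
print(round(mae(preds, targets), 4))  # → 0.5
```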
- Local Training: All training happens on device
- Weight Sharing: Only model weights are shared (no raw data)
- Secure Communication: gRPC over HTTP/2
- Data Isolation: Each client maintains separate data
- Synthetic Data: Current implementation uses generated data
- Model Size: Limited by TensorFlow Lite constraints
- Network Dependency: Requires stable internet connection
- Battery Usage: Training can be resource-intensive
- Real User Data: Integrate with actual user behavior data
- Advanced Models: Support for larger, more complex models
- Heterogeneous FL: Handle different client capabilities
- Privacy Techniques: Implement differential privacy
- Model Compression: Optimize model size for mobile
Contributions are welcome! Please feel free to submit issues, feature requests, or pull requests.
This project follows the same license as the SoraChain Framework.