A smart healthcare companion mobile application designed to bridge the gap between patients and preliminary medical diagnosis using Artificial Intelligence.
This project demonstrates the power of AI-Assisted Engineering. By leveraging tools like Claude 3.5 Sonnet and GPT-4 for code generation and architectural guidance, we achieved a 60% reduction in R&D time.
> Comparative analysis showing over 30 hours of development time saved across Research, Coding, and Testing phases.
- UI Development: Built the complete mobile interface using Flutter.
- System Integration: Connected the FastAPI backend services with the mobile frontend.
- Feature Assembly: Orchestrated various AI modules into a cohesive application.

## Challenge: Artificial Intelligence in Healthcare
A comprehensive healthcare application built with Flutter frontend and FastAPI backends, featuring AI-powered medical analysis tools including symptom checking, medical image classification, and lab report analysis.
| Name | Role | Email |
|---|---|---|
| Seif-Eldeen Mostafa | AI Engineer | seifeldeen.320240021@ejust.edu.eg |
| Mohamed Ahmed | Back End Engineer | Mohamed.320240078@ejust.edu.eg |
| Sara Youssef | Front End Engineer | sara.320240074@ejust.edu.eg |
| Arwa Waleed | Academic Writing | arwa.320240128@ejust.edu.eg |
| Judy Kamal | Academic Writing | Joudy.320240043@ejust.edu.eg |
You can download and try the latest build of the AI in Healthcare application from the link below:

https://drive.google.com/file/d/1d5KCNWoKFrYilVkajv_WNTjq-wIuKZGT/view?usp=sharing
- Intelligent Disease Prediction: XGBoost machine learning model for accurate symptom analysis
- Interactive Symptom Selection: Search and select from 132+ medical symptoms through an intuitive UI
- Confidence Scoring: Ranked disease predictions with detailed confidence percentages
- Real-time Analysis: Fast processing with comprehensive medical insights
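The ranked, confidence-scored predictions described above boil down to sorting a model's class probabilities. A minimal sketch of that step (the disease names and probability values here are hypothetical placeholders, not output from the project's XGBoost model):

```python
# Hypothetical label set; a real model would cover far more diseases.
DISEASES = ["Common Cold", "Influenza", "Migraine", "Allergy"]

def rank_predictions(probs, labels, top_k=3):
    """Return the top_k (label, confidence %) pairs, highest first."""
    ranked = sorted(zip(labels, probs), key=lambda pair: pair[1], reverse=True)
    return [(label, round(prob * 100, 1)) for label, prob in ranked[:top_k]]

# Probabilities as a model's predict_proba might return them (made up here)
predictions = rank_predictions([0.12, 0.61, 0.22, 0.05], DISEASES)
# -> [("Influenza", 61.0), ("Migraine", 22.0), ("Common Cold", 12.0)]
```

In the real service the probability vector would come from the trained XGBoost model; only the ranking and percentage formatting are shown here.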
- Cancer Detection: Advanced TensorFlow EfficientNetV2S model for medical image analysis
- Multi-source Input: Camera capture and gallery upload support
- Instant Results: Real-time image processing with detailed confidence scores
- Professional Analysis: Structured prediction results with medical interpretations
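Image classifiers typically produce raw scores (logits) that are normalized into the percentage confidences reported above. A standard-library sketch of that normalization, with hypothetical logits and labels (the project's EfficientNetV2S model is not reproduced here):

```python
import math

def softmax_confidences(logits, labels):
    """Convert raw classifier scores into percentage confidences (~100 total)."""
    exps = [math.exp(x - max(logits)) for x in logits]  # shift for stability
    total = sum(exps)
    return {label: round(100 * e / total, 1) for label, e in zip(labels, exps)}

# Hypothetical two-class output for a medical image
scores = softmax_confidences([2.0, 0.5], ["malignant", "benign"])
```

This is the conventional softmax step; the interesting work lives in the model that produces the logits.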
- AI-Powered Document Analysis: Hugging Face inference for intelligent lab report interpretation
- Document Processing: Support for various lab report formats and images
- Structured Results: Organized analysis with summary, key findings, and medical interpretations
- Comprehensive Insights: Detailed explanations and healthcare recommendations
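The structured results described above (summary, key findings, recommendations) can be captured in a simple response schema. A hypothetical sketch using a dataclass; field names and sample values are illustrative, not the service's actual schema:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class LabAnalysisResult:
    """Hypothetical response schema for the lab-analysis service."""
    summary: str
    key_findings: list = field(default_factory=list)
    recommendations: list = field(default_factory=list)

result = LabAnalysisResult(
    summary="Mild anemia indicated by low hemoglobin.",
    key_findings=["Hemoglobin below reference range"],
    recommendations=["Confirm results with a qualified physician"],
)
payload = asdict(result)  # plain dict, ready to serialize as JSON
```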
Before setting up the project, ensure you have the following installed:
- Flutter SDK: Version 3.0 or higher
- Python: Version 3.8 or higher
- Git: Latest version
- Android Studio/VS Code: For development
- Python Package Manager: pip
1. Clone the Repository

   ```bash
   git clone https://github.com/Tantawi65/Healthcare.git
   cd Healthcare
   ```

2. Install Flutter Dependencies

   ```bash
   flutter pub get
   ```

3. Configure Flutter Environment

   ```bash
   flutter doctor
   ```

   Ensure all requirements are met before proceeding.

4. Run the Flutter Application

   ```bash
   # For Android
   flutter run

   # For iOS (macOS only)
   flutter run -d ios

   # For Web
   flutter run -d web
   ```
The application consists of three separate FastAPI backend services that need to be started individually.
Symptom Checker Service (port 8002):

```bash
# Navigate to the service directory
cd backend/Text_classification/symptom_checker

# Create and activate virtual environment
python -m venv venv

# Windows
venv\Scripts\activate
# macOS/Linux
source venv/bin/activate

# Install required dependencies
pip install -r requirements.txt

# Start the FastAPI server
uvicorn main:app --host 0.0.0.0 --port 8002 --reload
```
Image Classification Service (port 8000):

```bash
# Navigate to the service directory
cd backend/Image_classification

# Create and activate virtual environment
python -m venv venv

# Windows
venv\Scripts\activate
# macOS/Linux
source venv/bin/activate

# Install required dependencies
pip install -r requirements.txt

# Start the FastAPI server
uvicorn main:app --host 0.0.0.0 --port 8000 --reload
```
Lab Analysis Service (port 8003):

```bash
# Navigate to the service directory
cd backend/Lab_analysis

# Create and activate virtual environment
python -m venv venv

# Windows
venv\Scripts\activate
# macOS/Linux
source venv/bin/activate

# Install required dependencies
pip install -r requirements.txt

# Set up environment variables
cp .env.example .env

# Edit .env file and add your Hugging Face API key:
# HUGGINGFACE_API_KEY=your-api-key-here

# Start the FastAPI server
uvicorn main:app --host 0.0.0.0 --port 8003 --reload
```

Create a .env file in each backend service directory with the following variables:
```bash
# Hugging Face API Key for Lab Analysis
HUGGINGFACE_API_KEY=your-huggingface-api-key-here

# API Configuration
SYMPTOM_CHECKER_PORT=8002
IMAGE_CLASSIFICATION_PORT=8000
LAB_ANALYSIS_PORT=8003

# CORS Origins (for production)
CORS_ORIGINS=http://localhost:3000,https://yourdomain.com
```

Update the API endpoints in lib/services/api_service.dart if needed:
```dart
class ApiService {
  static const String symptomCheckerBaseUrl = 'http://localhost:8002';
  static const String imageClassificationBaseUrl = 'http://localhost:8000';
  static const String labAnalysisBaseUrl = 'http://localhost:8003';
}
```

Frontend (Flutter) dependencies:

- flutter: SDK framework
- http: API communication
- image_picker: Camera and gallery access
- go_router: Navigation management
- provider: State management
Backend (Python) dependencies:

- fastapi: Web framework
- uvicorn: ASGI server
- xgboost: Machine learning (Symptom Checker)
- tensorflow: Deep learning (Image Classification)
- huggingface-hub: AI inference (Lab Analysis)
- python-multipart: File upload handling
- python-jose: JWT token handling
- pillow: Image processing
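At startup, each service can read the environment variables listed earlier (ports, API key, CORS origins). A minimal standard-library sketch; the fallback defaults are hypothetical and simply mirror the values shown in this README:

```python
import os

# Read service configuration from the environment, falling back to the
# README's defaults when a variable is unset.
HUGGINGFACE_API_KEY = os.getenv("HUGGINGFACE_API_KEY", "")
SYMPTOM_CHECKER_PORT = int(os.getenv("SYMPTOM_CHECKER_PORT", "8002"))
LAB_ANALYSIS_PORT = int(os.getenv("LAB_ANALYSIS_PORT", "8003"))
CORS_ORIGINS = os.getenv("CORS_ORIGINS", "http://localhost:3000").split(",")
```

In the actual services a library such as python-dotenv would typically load the .env file into the environment first.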
- Health Checks:
  - Symptom Checker: http://localhost:8002/health
  - Image Classification: http://localhost:8000/health
  - Lab Analysis: http://localhost:8003/health
- API Documentation:
  - Symptom Checker: http://localhost:8002/docs
  - Image Classification: http://localhost:8000/docs
  - Lab Analysis: http://localhost:8003/docs
```bash
# Test Symptom Checker
curl -X POST "http://localhost:8002/api/check-symptoms" \
  -H "Content-Type: application/json" \
  -d '{"symptoms": ["fever", "cough", "headache"]}'

# Test Image Classification
curl -X POST "http://localhost:8000/api/classify-image" \
  -F "image=@/path/to/medical/image.jpg"

# Test Lab Analysis
curl -X POST "http://localhost:8003/api/analyze-lab" \
  -F "image=@/path/to/lab/report.pdf"
```

```bash
# Android APK
flutter build apk --release

# Android App Bundle
flutter build appbundle --release

# iOS (macOS only)
flutter build ios --release

# Web
flutter build web --release
```

For production deployment, consider using:
- Docker containers for each service
- Environment variable management
- Load balancing for high availability
- API rate limiting and authentication
- Database integration for user data persistence
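As one concrete example of the rate-limiting point above, a minimal token-bucket limiter can be sketched in a few lines. This is not part of the project; the rate and capacity values are hypothetical, and a production deployment would more likely use a ready-made middleware or gateway:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: refill `rate` tokens per second,
    allowing bursts of up to `capacity` requests."""
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the time elapsed since the last request
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=2)
results = [bucket.allow() for _ in range(3)]  # burst of 2 allowed, third denied
```

In practice one bucket would be kept per client (e.g. keyed by IP or API key) in front of each FastAPI service.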
- Model Files: AI model files are excluded from the repository due to size constraints. You'll need to train your own models or obtain pre-trained models separately.
- API Keys: Never commit API keys to version control. Use environment variables for all sensitive configuration.
- Security: This application is for educational and research purposes. Implement proper authentication and security measures for production use.
This application is developed for educational and research purposes only. It should not be used as a substitute for professional medical advice, diagnosis, or treatment. Always consult with qualified healthcare professionals for medical concerns.
The backend services are deployed and accessible online for testing and demonstration purposes.
Each service runs on a separate FastAPI instance with integrated AI models.
- Lab Analysis: analyze lab reports using AI-powered document understanding: https://tantawi-lab-analyzer.hf.space/
- Image Classification: classify medical images and detect cancerous patterns: https://tantawi-image-class.hf.space/docs
- Symptom Checker: predict possible diseases based on user-entered symptoms: https://tantawi-text-classification.hf.space/docs
This project integrates Generative AI tools throughout the development process to enhance productivity, code quality, and documentation.
All screenshots, prompts, and AI responses used during development are organized and available in the repository under the folder "AI Usage".
For technical support or project inquiries, please contact any team member using the email addresses provided above.
Built with ❤️ by the GP-Tea Team for advancing AI in Healthcare