Signify is an AI-powered Sign Language Translator that leverages Computer Vision and Deep Learning to recognize hand gestures and translate them into meaningful text and speech in real time.
This project is designed with a clear, scalable roadmap, starting from basic gesture recognition and progressing toward full sentence formation and text-to-speech output.
Communication barriers faced by the hearing- and speech-impaired community can be significantly reduced using intelligent systems.
Signify aims to bridge this gap by converting sign language gestures into readable and audible outputs using a webcam-based real-time system.
The project follows a modular and phase-wise development approach, making it easy to extend and improve.
Key features:

- Static hand gesture recognition (a capture sketch follows this list)
- Motion-based gesture recognition using LSTM
- Real-time gesture detection via webcam
- Sentence formation from continuous gestures
- Text-to-Speech (TTS) output
- Clean, modular, and scalable architecture
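As an illustration of how the static-recognition and real-time webcam features can fit together, here is a minimal capture loop using OpenCV and MediaPipe Hands. This is a sketch, not the project's actual app.py; the `extract_landmarks` helper is a name invented for this example.

```python
# Minimal sketch: real-time hand landmark extraction with OpenCV + MediaPipe.
# The flattened landmark vector is what a downstream classifier would consume.
import cv2
import mediapipe as mp
import numpy as np

mp_hands = mp.solutions.hands
mp_draw = mp.solutions.drawing_utils

def extract_landmarks(results):
    """Flatten 21 (x, y, z) hand landmarks into a 63-dim feature vector."""
    hand = results.multi_hand_landmarks[0]
    return np.array([[p.x, p.y, p.z] for p in hand.landmark]).flatten()

cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV captures BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            features = extract_landmarks(results)  # feed to a gesture classifier
            mp_draw.draw_landmarks(frame, results.multi_hand_landmarks[0],
                                   mp_hands.HAND_CONNECTIONS)
        cv2.imshow("Signify", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
cap.release()
cv2.destroyAllWindows()
```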
Roadmap:

- Static Gesture Recognition
- Motion-Based Gesture Recognition (model sketch after this list)
- Sentence Formation
- Text-to-Speech Integration (combined sentence-and-speech sketch after this list)
- Web / Mobile Deployment
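For the motion-based phase, the usual approach is a sequence model over per-frame landmark vectors. A minimal Keras sketch, assuming 30-frame clips of 63 features (21 landmarks × x/y/z) and 10 gesture classes; all three numbers are illustrative choices, not values from this repository:

```python
# Minimal sketch: LSTM classifier over sequences of hand-landmark vectors.
from tensorflow.keras import layers, models

SEQ_LEN = 30      # frames per gesture clip (assumed)
N_FEATURES = 63   # 21 landmarks x (x, y, z) per frame
N_CLASSES = 10    # number of gesture labels (assumed)

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, N_FEATURES)),
    layers.LSTM(64, return_sequences=True),
    layers.LSTM(128),
    layers.Dense(64, activation="relu"),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```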
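Sentence formation and speech output can be as simple as debouncing the per-frame prediction stream and handing the result to a TTS engine. The tech stack below does not name a TTS library; pyttsx3 is a common offline option and is used here purely as an assumption, as are `form_sentence` and the stability threshold:

```python
# Minimal sketch: turn a stream of per-frame predictions into a sentence,
# then speak it. pyttsx3 is an assumed TTS choice, not confirmed by the repo.
import pyttsx3

STABLE_FRAMES = 15  # frames a label must persist before acceptance (assumed)

def form_sentence(predictions):
    """Collapse a per-frame label stream into words, dropping jitter."""
    words, run_label, run_len = [], None, 0
    for label in predictions:
        if label == run_label:
            run_len += 1
        else:
            run_label, run_len = label, 1
        if run_len == STABLE_FRAMES and (not words or words[-1] != label):
            words.append(label)
    return " ".join(words)

def speak(text):
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

sentence = form_sentence(["HELLO"] * 20 + ["WORLD"] * 20)
speak(sentence)  # -> "HELLO WORLD"
```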
Tech stack:

- Programming Language: Python
- Computer Vision: OpenCV, MediaPipe
- Deep Learning: TensorFlow, Keras
- Numerical Computing: NumPy
- Web Interface: Streamlit
Project structure:

```
Signify/
├── src/               # Core source code
├── app.py             # Main application
├── web_app.py         # Web interface
├── requirements.txt   # Dependencies
├── .gitignore         # Ignored files
└── README.md          # Project documentation
```
Getting started:

- Clone the repository:

```bash
git clone https://github.com/Aishu-yk/Signify.git
cd Signify
```

- Install dependencies:

```bash
pip install -r requirements.txt
```

- Run the application:

```bash
python app.py
```

For the web interface:

```bash
streamlit run web_app.py
```
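web_app.py itself is not reproduced in this README; as a rough idea of the Streamlit side, a single page can capture a frame with `st.camera_input` and pass it through the same recognition pipeline. `predict_gesture` below is a hypothetical stand-in for the project's classifier:

```python
# Minimal sketch of a Streamlit front end (hypothetical wiring).
import cv2
import numpy as np
import streamlit as st

st.title("Signify - Sign Language Translator")

snapshot = st.camera_input("Show a gesture to the camera")
if snapshot is not None:
    # Decode the captured image into an OpenCV BGR array.
    frame = cv2.imdecode(np.frombuffer(snapshot.getvalue(), np.uint8),
                         cv2.IMREAD_COLOR)
    # predict_gesture is a placeholder for the recognition pipeline above.
    # st.write(predict_gesture(frame))
    st.image(frame, channels="BGR", caption="Captured frame")
```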
Author: Aishwarya Y K (Aishu)
B.Tech in Artificial Intelligence & Data Science
GitHub: https://github.com/Aishu-yk