A modern, real-time text prediction application powered by LSTM neural networks. Features a sleek UI with instant word suggestions as you type, similar to modern text editors and IDEs.
- Real-time Predictions: LSTM-powered word predictions as you type
- Modern UI: Glassmorphic design with smooth animations
- Instant Suggestions: Tab to accept, Esc to dismiss
- Live Statistics: Real-time word and character counts
- Smart Cursor: Maintains cursor position after predictions
- Server Status: Visual connection indicator
- Copy to Clipboard: One-click text copying
- Responsive Design: Works on desktop and mobile
- Python 3.9+
- Flask - Web framework
- TensorFlow 2.19.0 - LSTM model
- Flask-CORS - Cross-origin requests
- HTML5 - Semantic markup
- CSS3 - Modern styling with glassmorphism
- Vanilla JavaScript - Real-time interactions
- REST API - Backend communication
- Python 3.9 or higher
- Node.js (for development server, optional)
- Modern web browser
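To confirm the interpreter meets the Python requirement before installing anything, a one-off check can be run (this snippet is illustrative, not part of the repo):

```python
import sys

# The backend targets Python 3.9 or newer.
if sys.version_info < (3, 9):
    raise SystemExit(f"Python 3.9+ required, found {sys.version.split()[0]}")
print("Python version OK:", sys.version.split()[0])
```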
1. Clone the repository

   ```bash
   git clone https://github.com/yourusername/lstm-text-predictor.git
   cd lstm-text-predictor
   ```

2. Set up the backend

   ```bash
   cd backend
   pip install -r requirements.txt
   ```

3. Start the Flask server

   ```bash
   python app.py
   ```

   The server will run on http://localhost:5000.

4. Launch the frontend

   ```bash
   cd ../frontend
   # Open index.html in your browser, or use a local server:
   python -m http.server 8080
   ```

   The frontend will be available at http://localhost:8080.
```
lstm-text-predictor/
├── backend/
│   ├── app.py              # Flask API server
│   ├── model.py            # LSTM model definition
│   ├── train.py            # Model training script
│   ├── requirements.txt    # Python dependencies
│   └── models/             # Saved model files
├── frontend/
│   ├── index.html          # Main HTML file
│   ├── style.css           # Styling and animations
│   ├── script.js           # JavaScript functionality
│   └── assets/             # Static assets
├── data/
│   └── training_data.txt   # Text corpus for training
└── README.md
```
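The training script itself is not shown here; as a rough idea of the preprocessing it needs, the following dependency-free sketch builds a vocabulary and fixed-length (context, next-word) pairs from a text corpus. Function names are illustrative, not the actual `train.py` API.

```python
def build_vocab(text):
    """Map each distinct word to an integer index, in order of first appearance."""
    vocab = {}
    for word in text.split():
        vocab.setdefault(word, len(vocab))
    return vocab

def make_sequences(text, seq_len=3):
    """Produce (context, next-word) training pairs with a fixed context length."""
    words = text.split()
    pairs = []
    for i in range(len(words) - seq_len):
        pairs.append((words[i:i + seq_len], words[i + seq_len]))
    return pairs

corpus = "the quick brown fox jumps over the lazy dog"
vocab = build_vocab(corpus)
pairs = make_sequences(corpus, seq_len=3)
print(pairs[0])  # (['the', 'quick', 'brown'], 'fox')
```

The `seq_len` parameter here corresponds to the sequence length mentioned in the Customization notes below: longer contexts give the LSTM more to condition on at the cost of more training data per example.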
- Start typing in the text editor
- View predictions in the suggestions panel
- Press Tab to accept the first suggestion
- Press Esc to clear suggestions
- Click suggestions to insert them directly
- Use keyboard shortcuts for efficient editing
| Shortcut | Action |
|---|---|
| `Tab` | Accept first prediction |
| `Esc` | Clear predictions |
| `Ctrl + Enter` | New line |
| `Arrow Keys` | Navigate suggestions |
Check server status

```json
{
  "status": "healthy",
  "model_loaded": true
}
```

Get word predictions

```json
// Request
{
  "text": "The quick brown",
  "max_predictions": 5
}

// Response
{
  "predictions": ["fox", "dog", "cat", "bird", "rabbit"]
}
```

- Modify `frontend/style.css` for visual changes
- Glassmorphic design with CSS `backdrop-filter`
- Responsive grid layout
- Adjust LSTM layers in `backend/model.py`
- Modify sequence length for different contexts
- Change vocabulary size for domain-specific text
- Edit `backend/app.py` for custom prediction algorithms
- Implement confidence scoring
- Add text preprocessing steps
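Under the hood, the `predictions` array returned by the API is typically the top-k entries of the model's softmax output. A minimal decoding sketch in plain Python (the vocabulary and probability values here are made-up examples, and the function name is illustrative):

```python
def top_k_predictions(probs, index_to_word, k=5):
    """Return the k most probable words from a model's output distribution."""
    # Rank vocabulary indices by descending probability and keep the first k.
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    return [index_to_word[i] for i in ranked[:k]]

# Toy 5-word vocabulary with illustrative probabilities.
vocab = ["fox", "dog", "cat", "bird", "rabbit"]
probs = [0.40, 0.25, 0.15, 0.12, 0.08]
print(top_k_predictions(probs, vocab, k=3))  # ['fox', 'dog', 'cat']
```

Confidence scoring, as suggested above, could start from the same ranked probabilities, e.g. by attaching each word's probability to the response.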
Server Connection Failed

```bash
# Check if the Flask server is running
curl http://localhost:5000/health
```

Model Loading Error

```bash
# Verify the TensorFlow installation
python -c "import tensorflow as tf; print(tf.__version__)"
```

CORS Issues

```bash
# Ensure flask-cors is installed
pip install flask-cors
```

```bash
# Backend with auto-reload
export FLASK_ENV=development
python app.py
```

```bash
# Frontend with live reload (optional)
npm install -g live-server
live-server frontend/
```

```bash
# Test API endpoints
python -m pytest tests/
```

```javascript
// Frontend testing in the browser console
window.textPredictor.checkServerStatus()
```

- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
- Follow PEP 8 for Python code
- Use ES6+ JavaScript features
- Add comments for complex logic
- Test on multiple browsers
- Ensure responsive design
This project is licensed under the MIT License - see the LICENSE file for details.
- TensorFlow team for the amazing ML framework
- Flask community for the lightweight web framework
- Open source datasets for training data
- Modern web standards for glassmorphic design inspiration
- Prediction Speed: < 100ms average response time
- Model Size: ~50MB (optimized for web deployment)
- Memory Usage: ~200MB during inference
- Browser Support: Chrome 90+, Firefox 88+, Safari 14+
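The sub-100 ms response-time figure can be sanity-checked with a small timing harness. This sketch times a stand-in predict function; to measure the real system, swap in a request against the running server:

```python
import time

def predict_stub(text):
    # Stand-in for a real call to the prediction endpoint.
    return ["fox", "dog", "cat", "bird", "rabbit"]

start = time.perf_counter()
predictions = predict_stub("The quick brown")
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"{len(predictions)} predictions in {elapsed_ms:.3f} ms")
```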
- Multi-language support
- Context-aware predictions
- User personalization
- Voice input integration
- Mobile app version
- Cloud deployment ready
- Real-time collaboration
⭐ Star this repo if you find it helpful!