Langlish is a real-time English learning voice assistant that uses OpenAI's speech-to-speech technology to provide immersive and interactive language practice. Learners can speak naturally and receive immediate spoken feedback, corrections, and guidance from an AI English tutor. Langlish helps expand vocabulary by offering clear explanations, contextual usage, and interactive conversation, making learning new words intuitive and engaging.
- Real-time speech-to-speech communication
- OpenAI Real-time API integration
- Conversational English learning
- Grammar corrections and vocabulary help
- Natural voice interaction with audio processing
- Modern web interface with React and TypeScript
- FastAPI backend with WebSocket support
- AWS S3 integration for audio storage (save and archive conversation audio files)
- MLflow integration for experiment tracking
- Apache Airflow for workflow orchestration
- Python 3.11 or higher
- Node.js 18 or higher
- npm or pnpm package manager
- uv package manager (recommended for backend)
- OpenAI API key with Real-time API access
- FFmpeg (for audio processing)
- AWS account (for audio storage)
- Docker and Docker Compose (for containerized setup)
- At least 4GB RAM (6GB+ recommended for full stack with Airflow)
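Before installing anything, you can sanity-check the prerequisites above with a short throwaway script. This is only an illustrative sketch (not part of the repository); it checks the Python version and looks for Node.js, npm, and FFmpeg on your PATH:

```python
# check_prereqs.py - hypothetical helper, not included in the repo
import shutil
import subprocess
import sys

# Python 3.11 or higher is required
assert sys.version_info >= (3, 11), "Python 3.11+ is required"
print(f"python: {sys.version.split()[0]}")

# Node.js 18+, npm/pnpm, and FFmpeg must be on PATH
for tool in ("node", "npm", "ffmpeg"):
    path = shutil.which(tool)
    print(f"{tool}: {path or 'NOT FOUND'}")

# Print the Node.js version so you can confirm it is 18 or higher
if shutil.which("node"):
    version = subprocess.run(["node", "--version"], capture_output=True, text=True)
    print(f"node version: {version.stdout.strip()}")
```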
Clone the repository:

```bash
git clone https://github.com/fdgbatarse1/langlish.git
cd langlish
```

Install FFmpeg:

Ubuntu/Debian:

```bash
sudo apt update && sudo apt install ffmpeg
```

macOS:

```bash
brew install ffmpeg
```

Windows: Download from ffmpeg.org or use Chocolatey:

```bash
choco install ffmpeg
```

Create a `.env` file in the root directory:
```
# For Docker setup (if using)
AIRFLOW_UID=$(id -u)  # On Linux, use your actual user ID

# Airflow default credentials (change in production)
_AIRFLOW_WWW_USER_USERNAME=admin
_AIRFLOW_WWW_USER_PASSWORD=admin
```
- Navigate to the backend directory:

```bash
cd backend
```

- Install uv (if not already installed):

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```

- Create a virtual environment and install dependencies:

```bash
uv venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
uv pip install -e .
```

- Create the backend `.env` file:
```bash
cat > .env << EOF
# OpenAI Configuration (REQUIRED)
OPENAI_API_KEY=your_openai_api_key_here

# AWS S3 Configuration (OPTIONAL - for audio storage)
AWS_ACCESS_KEY_ID=your_aws_access_key_id_here
AWS_SECRET_ACCESS_KEY=your_aws_secret_access_key_here
AWS_S3_BUCKET_NAME=your_s3_bucket_name_here
AWS_S3_REGION=us-east-1

# MLflow Configuration (automatically configured if using Docker)
# MLFLOW_TRACKING_URI=http://localhost:5000
EOF
```

For detailed AWS S3 setup instructions, see backend/AWS_S3_SETUP.md.
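If you filled in the optional S3 variables, it can save time to confirm the credentials and bucket before starting the backend. The snippet below is only a sketch (not part of the project): it assumes `boto3` and `python-dotenv` are installed and that you run it from the backend directory so the `.env` file above is picked up:

```python
# s3_sanity_check.py - hypothetical helper, not included in the repo
import os

import boto3
from botocore.exceptions import ClientError
from dotenv import load_dotenv  # pip install boto3 python-dotenv

load_dotenv()  # loads backend/.env from the current directory

bucket = os.environ["AWS_S3_BUCKET_NAME"]
s3 = boto3.client(
    "s3",
    region_name=os.environ.get("AWS_S3_REGION", "us-east-1"),
    aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
)

try:
    # head_bucket is a cheap call that fails on bad credentials or a missing bucket
    s3.head_bucket(Bucket=bucket)
    print(f"OK: bucket '{bucket}' is reachable")
except ClientError as err:
    print(f"S3 check failed: {err}")
```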
- Navigate to the frontend directory:
```bash
cd ../frontend
```

- Install dependencies:

```bash
npm install
# or
pnpm install
```
- Start the backend server (in the backend directory):

```bash
cd ../backend
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
python -m uvicorn main:app --reload
```

- In a new terminal, start the frontend (in the frontend directory):

```bash
cd frontend
npm run dev
# or
pnpm dev
```

- Open your browser and navigate to http://localhost:5173
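If you prefer to confirm the backend is reachable before opening the browser, you can poll its health endpoint (the same one listed under troubleshooting below). A minimal sketch, assuming the `requests` package is installed:

```python
# wait_for_backend.py - hypothetical helper, not included in the repo
import time

import requests

URL = "http://localhost:8000/health"

for _ in range(30):
    try:
        response = requests.get(URL, timeout=1)
        if response.ok:
            print(f"Backend is up: {response.status_code} {response.text}")
            break
    except requests.ConnectionError:
        pass
    time.sleep(1)
else:
    print("Backend did not respond within ~30 seconds")
```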
To run the stack with Docker instead:

- Ensure Docker and Docker Compose are installed
- Create the necessary `.env` files as described above
- Run the entire stack:
```bash
# Start all services (Backend, Frontend, MLflow, Airflow)
docker compose up -d

# Or start only Backend and Frontend
docker compose up backend frontend -d

# View logs
docker compose logs -f
```

Access the services:
- Frontend: http://localhost:5173
- Backend API: http://localhost:8000
- API Documentation: http://localhost:8000/docs
- MLflow UI: http://localhost:5000
- Airflow UI: http://localhost:8080 (username: admin, password: admin)
For detailed Docker setup, see DOCKER_SETUP.md.
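To confirm the MLflow server started by Docker Compose is accepting runs, you can log a throwaway experiment to it. This is only a sketch, assuming the `mlflow` package is installed locally and the default tracking URI from the configuration above:

```python
# mlflow_smoke_test.py - hypothetical helper, not included in the repo
import mlflow

# Default MLflow address from the Docker setup (see the .env comments above)
mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("langlish-smoke-test")  # created automatically if missing

with mlflow.start_run(run_name="connectivity-check"):
    mlflow.log_param("source", "readme-sketch")
    mlflow.log_metric("ping", 1.0)

print("Logged a test run; it should appear in the MLflow UI at http://localhost:5000")
```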
- Start a conversation: Click the blue microphone button
- Speak naturally: Ask questions, practice conversation, or request help
- Stop recording: Click the red button
- Listen and learn: Hear Langlish's response and continue the conversation
Project structure:

```
langlish/
├── backend/ # FastAPI backend with WebSocket support
│ ├── main.py # Application entry point
│ ├── src/ # Source code
│ ├── tests/ # Backend tests
│ └── README.md # Backend-specific documentation
├── frontend/ # React frontend application
│ ├── src/ # React components and logic
│ ├── public/ # Static assets
│ ├── dist/ # Build output
│ └── README.md # Frontend-specific documentation
├── lifecycle/ # MLflow experiment tracking
│ └── README.md # MLflow setup guide
├── airflow/ # Apache Airflow workflows
│ ├── dags/ # DAG definitions
│ └── README.md # Airflow setup guide
├── docker-compose.yml # Multi-service Docker configuration
├── DOCKER_SETUP.md # Docker setup instructions
├── README.md # This file
└── LICENSE # Project license
```
Development checks:

Backend:

```bash
cd backend
pytest        # Run tests
mypy .        # Type checking
ruff check .  # Linting
ruff format . # Formatting
```

Frontend:

```bash
cd frontend
npm run test       # Run tests
npm run lint       # Linting
npm run type-check # Type checking
npm run build      # Build for production
```

Troubleshooting:

- Ensure your browser has microphone permissions
- Check system privacy settings
- For production, HTTPS is required for microphone access
- Verify backend is running on port 8000
- Check for firewall or proxy blocking WebSocket connections (a scripted connection test is sketched at the end of this section)
- Ensure frontend environment variables are correct
- Verify FFmpeg is installed: `ffmpeg -version`
- Ensure FFmpeg is in your system PATH
- Restart your terminal after installation
- Verify your API key has Real-time API access
- Check API key is correctly set in backend/.env
- Monitor OpenAI API status page for outages
- Increase Docker memory allocation (minimum 4GB)
- Run fewer services: `docker compose up backend frontend -d`
- Check Docker Desktop settings
- Check service logs:
  - Backend: check terminal output or `docker compose logs backend`
  - Frontend: check the browser console
  - Docker: `docker compose logs -f [service-name]`
- Verify service health:
  - Backend: http://localhost:8000/health
  - Frontend: http://localhost:5173
  - MLflow: http://localhost:5000
  - Airflow: http://localhost:8080/health
- For more help, see the individual service READMEs in their respective directories.
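If the WebSocket checks above still leave it unclear whether the problem is in the browser or the backend, a scripted connection attempt can isolate it. The sketch below is illustrative only: the `/ws` path is an assumption (check the backend source or http://localhost:8000/docs for the actual route), and it requires the `websockets` package:

```python
# ws_connect_test.py - hypothetical helper; the /ws path is a guess, verify the real route
import asyncio

import websockets


async def main() -> None:
    try:
        # If the handshake succeeds, the backend and its WebSocket route are reachable
        async with websockets.connect("ws://localhost:8000/ws", open_timeout=5):
            print("WebSocket handshake succeeded")
    except Exception as err:  # connection refused, 403/404 on upgrade, timeout, ...
        print(f"WebSocket connection failed: {err}")


asyncio.run(main())
```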
To contribute:

- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Make your changes
- Add tests if applicable
- Run the development checks
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.


