A sophisticated AI chess engine powered by deep learning and Monte Carlo Tree Search (MCTS). Features a responsive GUI, opening books, endgame tablebases, and AlphaZero-style self-play training.
- Deep Neural Network: Custom PyTorch CNN with residual blocks for position evaluation
- Monte Carlo Tree Search: Intelligent move selection with 800+ simulations per move
- Opening Book: Million+ position database from master games
- Endgame Tablebases: Perfect play using Syzygy tablebases
- Self-Play Learning: Continuous improvement through reinforcement learning
- Smooth Gameplay: Pygame-based responsive interface
- Visual Feedback: Move highlighting, legal moves, and thinking animations
- Real-time Stats: Live move history, evaluation scores, and game analysis
- Non-blocking UI: Threaded bot calculations keep interface responsive
- Supervised Learning: Train on PGN databases of master games
- Reinforcement Learning: Generate training data through self-play
- Model Checkpointing: Automatic saving and version management
- Performance Monitoring: TensorBoard integration with loss tracking
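The MCTS move selection above can be illustrated with the PUCT rule used by AlphaZero-style engines: each simulation descends the tree by picking the child that maximizes an exploitation term (mean value Q) plus a prior-weighted exploration bonus (U). This is a minimal plain-Python sketch; the function and field names are hypothetical, not the actual `core/search.py` API:

```python
import math


def puct_score(child_value, child_visits, child_prior, parent_visits, c_puct=1.5):
    """AlphaZero-style PUCT: mean value Q plus prior-weighted exploration U."""
    q = child_value / child_visits if child_visits > 0 else 0.0
    u = c_puct * child_prior * math.sqrt(parent_visits) / (1 + child_visits)
    return q + u


def select_move(children, parent_visits):
    """Pick the child move with the highest PUCT score."""
    return max(
        children,
        key=lambda c: puct_score(c["value"], c["visits"], c["prior"], parent_visits),
    )


# Toy example: two candidate moves at a node visited 100 times.
# d2d4 has the higher mean value and fewer visits, so PUCT prefers it here.
children = [
    {"move": "e2e4", "value": 30.0, "visits": 60, "prior": 0.5},
    {"move": "d2d4", "value": 12.0, "visits": 20, "prior": 0.3},
]
best = select_move(children, parent_visits=100)  # -> the d2d4 entry
```

With 800 simulations per move, this selection step runs 800 times from the root, and visit counts concentrate on moves that keep scoring well.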
| Metric | Value |
|---|---|
| Estimated Elo | 1800-2200 |
| Policy Loss | 6.5 → 2.8 (after training) |
| Value Loss | 1.0 → 0.86 (after training) |
| Search Speed | 800 simulations in 2-5s |
| Opening Positions | 500,000+ |
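The Elo estimate in the table can be read through the standard Elo expected-score formula: a 400-point gap corresponds to roughly a 91% expected score for the stronger side. A quick sketch:

```python
def expected_score(rating_a, rating_b):
    """Standard Elo expected score for player A against player B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))


# Top of the estimated range (2200) against the bottom (1800)
e = expected_score(2200, 1800)  # ~0.91
```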
```bash
# Clone repository
git clone https://github.com/your-username/chess-ml-bot.git
cd chess-ml-bot

# Create virtual environment
python -m venv chess_env
source chess_env/bin/activate  # Windows: chess_env\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Install CUDA PyTorch (for GPU training)
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
```

```bash
# Launch the GUI
python main.py --interface gui
```

```bash
# Supervised training
python train_model.py --mode supervised --epochs 30 --batch-size 64

# Self-play training
python train_model.py --mode self_play --games 100
```

- Click: Select and move pieces
- N: New game
- U: Undo move
- F: Flip board
- A: Analysis mode
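Hotkeys like these are typically wired up as a dispatch table that maps a key to a handler method. A minimal sketch with a stub game object — the real bindings live in `ui/gui.py` and the method names here are hypothetical:

```python
# Stub standing in for the GUI's game controller; the actual class
# and method names in ui/gui.py may differ.
class GameStub:
    def __init__(self):
        self.calls = []

    def new_game(self):
        self.calls.append("new")

    def undo_move(self):
        self.calls.append("undo")

    def flip_board(self):
        self.calls.append("flip")

    def analysis_mode(self):
        self.calls.append("analysis")


def make_keymap(game):
    """Map lowercase key names to bound handler methods."""
    return {
        "n": game.new_game,
        "u": game.undo_move,
        "f": game.flip_board,
        "a": game.analysis_mode,
    }


game = GameStub()
keymap = make_keymap(game)
keymap["n"]()  # pressing N starts a new game
```

A dispatch table keeps the Pygame event loop flat: one dictionary lookup per keypress instead of a chain of `if event.key == ...` branches.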
```bash
# CLI gameplay
python main.py --interface cli

# Analysis mode
python main.py --analyze position.fen

# Tournament mode
python tournament.py --games 50
```

```
chess-ml-bot/
├── core/                 # AI Engine
│   ├── engine.py         # Main chess engine
│   ├── neural_net.py     # PyTorch neural network
│   ├── search.py         # MCTS implementation
│   └── evaluation.py     # Position evaluation
├── ui/                   # User Interfaces
│   ├── gui.py            # Pygame GUI
│   └── cli.py            # Command line
├── training/             # ML Training
│   ├── trainer.py        # Training pipeline
│   ├── reinforcement.py  # Self-play learning
│   └── data_loader.py    # Data processing
├── features/             # Advanced Features
│   ├── opening_book.py   # Opening database
│   ├── tablebase.py      # Endgame tablebases
│   └── time_manager.py   # Time allocation
└── data/                 # Data Storage
    ├── models/           # Trained models
    ├── opening_books/    # PGN databases
    └── training_data/    # Training datasets
```
The model uses a ResNet-inspired architecture:

```
Input: 14×8×8 board representation
├── Convolutional layers (3×3 kernels)
├── 12× residual blocks (256 filters each)
├── Batch normalization + ReLU activation
└── Dual heads:
    ├── Policy head → 4096 possible moves
    └── Value head → position evaluation (-1 to +1)
```

- Policy Loss: 6.53 → 2.80 (-57%)
- Value Loss: 1.02 → 0.86 (-16%)
- Total Loss: 7.54 → 3.66 (-51%)
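The 14×8×8 input can be illustrated with a plain-Python FEN encoder. The plane layout used here — 12 one-hot piece planes (6 piece types × 2 colors), a side-to-move plane, and one spare — is an assumption for illustration; the actual layout is defined in `core/neural_net.py`:

```python
# Hypothetical 14-plane encoding; the real layout may differ.
PIECES = "PNBRQKpnbrqk"  # FEN letters: white pieces, then black


def encode_fen(fen):
    """Turn a FEN string into a 14x8x8 nested list of 0.0/1.0 planes."""
    planes = [[[0.0] * 8 for _ in range(8)] for _ in range(14)]
    board, side = fen.split()[0], fen.split()[1]
    for rank, row in enumerate(board.split("/")):
        file = 0
        for ch in row:
            if ch.isdigit():
                file += int(ch)  # run of empty squares
            else:
                planes[PIECES.index(ch)][rank][file] = 1.0
                file += 1
    if side == "w":  # plane 12: side to move (plane 13 left as spare)
        planes[12] = [[1.0] * 8 for _ in range(8)]
    return planes


start = encode_fen("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1")
```

A real pipeline would convert these nested lists to a `torch.Tensor` before the forward pass; plain lists keep the sketch dependency-free.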
```bash
# Download master games (example: Lichess database)
wget https://database.lichess.org/standard/lichess_db_standard_rated_2023-01.pgn.bz2
bunzip2 lichess_db_standard_rated_2023-01.pgn.bz2
mv lichess_db_standard_rated_2023-01.pgn data/opening_books/master_games.pgn
```

Edit `config.json`:
```json
{
  "model": {
    "layers": 12,
    "channels": 256,
    "learning_rate": 0.001
  },
  "training": {
    "epochs": 30,
    "batch_size": 64,
    "device": "cuda"
  }
}
```

```bash
python train_model.py --config config.json
```

```python
from core.neural_net import ChessNet

# Create custom model
model = ChessNet(
    input_channels=14,
    residual_blocks=20,
    filters=512
)
```

```python
from core.engine import ChessEngine

# Initialize engine
engine = ChessEngine()
best_move = engine.get_best_move()
```

```python
from training.reinforcement import SelfPlayLearning

# Start self-play
trainer = SelfPlayLearning(model)
trainer.run_self_play(num_games=1000)
```

Example analysis of a Ruy Lopez line:

```
1. e4 e5 2. Nf3 Nc6 3. Bb5 a6 4. Ba4 Nf6 5. O-O Be7

Bot evaluation: +0.2 (slight advantage to White)
Best move: d3 (35% confidence)
```
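A "35% confidence" figure like the one above is commonly obtained by normalizing the root's MCTS visit counts into a probability over moves — an assumption here, not necessarily this engine's exact formula:

```python
def move_confidence(visit_counts):
    """Normalize MCTS root visit counts into per-move probabilities."""
    total = sum(visit_counts.values())
    return {move: visits / total for move, visits in visit_counts.items()}


# 800 simulations split across hypothetical candidate moves
conf = move_confidence({"d3": 280, "c3": 200, "Re1": 180, "Nc3": 140})
# conf["d3"] == 0.35
```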
```bash
python main.py --analyze "rnbqkbnr/pppppppp/8/8/4P3/8/PPPP1PPP/RNBQKBNR b KQkq e3 0 1"
# Outputs detailed position analysis and best moves
```

We welcome contributions! Here's how to get started:

- Fork the repository
- Create a feature branch: `git checkout -b feature-name`
- Make your changes and add tests
- Run tests: `python -m pytest tests/`
- Submit a pull request
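"Add tests" can look like the following self-contained pytest-style example; the helper and its name are hypothetical, not a real repo API:

```python
# Hypothetical shape for a test in tests/; real tests would import
# from core/ instead of defining the helper inline.
def legal_knight_moves(square):
    """Knight moves from a (file, rank) square on an 8x8 board."""
    f, r = square
    deltas = [(1, 2), (2, 1), (2, -1), (1, -2), (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    return [(f + df, r + dr) for df, dr in deltas if 0 <= f + df < 8 and 0 <= r + dr < 8]


def test_corner_knight_has_two_moves():
    # A knight on a1 can only reach b3 and c2
    assert len(legal_knight_moves((0, 0))) == 2


test_corner_knight_has_two_moves()  # pytest would collect this automatically
```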
```bash
# Install development dependencies
pip install -r requirements-dev.txt

# Run code formatting
black .
flake8 .

# Run type checking
mypy core/ training/
```

| Issue | Solution |
|---|---|
| GPU not detected | Install CUDA-enabled PyTorch |
| Missing opening book | Download PGN files to data/opening_books/ |
| GUI freezing | Enable threading in config |
| Training slow | Use GPU and increase batch size |
- Use mixed precision training: `--mixed-precision`
- Increase batch size for GPU: `--batch-size 128`
- Monitor with TensorBoard: `tensorboard --logdir=logs/`
This project is licensed under the MIT License - see the LICENSE file for details.
- DeepMind AlphaZero for the self-play methodology
- Stockfish for benchmarking and inspiration
- python-chess library for chess logic
- PyTorch team for the ML framework
- Lichess for open chess databases
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Email: roshankumar0036@gmail.com

Ready to play chess against AI?

