Chess ML Bot 🤖♟️


A sophisticated AI chess engine powered by deep learning and Monte Carlo Tree Search (MCTS). Features a responsive GUI, opening books, endgame tablebases, and AlphaZero-style self-play training.

Chess Bot Demo

🌟 Features

🧠 AI Engine

  • Deep Neural Network: Custom PyTorch CNN with residual blocks for position evaluation
  • Monte Carlo Tree Search: Intelligent move selection with 800+ simulations per move
  • Opening Book: Million+ position database from master games
  • Endgame Tablebases: Perfect play using Syzygy tablebases
  • Self-Play Learning: Continuous improvement through reinforcement learning
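The move selection described above follows the AlphaZero family of MCTS, which scores each candidate child node with the PUCT rule: an exploitation term (average value) plus an exploration bonus weighted by the network's prior. The sketch below is illustrative only — the function name, the dictionary layout, and the `c_puct=1.5` constant are assumptions, not the repository's actual code.

```python
import math

def puct_score(parent_visits, child_visits, child_value, prior, c_puct=1.5):
    """AlphaZero-style PUCT: average value plus a prior-weighted exploration bonus."""
    q = child_value / child_visits if child_visits > 0 else 0.0
    u = c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)
    return q + u

# At each step of a simulation, descend into the child with the highest score.
children = [
    {"visits": 10, "value": 6.0, "prior": 0.5},  # well explored, good average
    {"visits": 1,  "value": 0.2, "prior": 0.4},  # barely explored
]
best = max(children, key=lambda c: puct_score(11, c["visits"], c["value"], c["prior"]))
```

Note how the under-explored child can win on its exploration bonus alone; that is what pushes the search to revisit uncertain lines before committing.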

🎮 Interactive GUI

  • Smooth Gameplay: Pygame-based responsive interface
  • Visual Feedback: Move highlighting, legal moves, and thinking animations
  • Real-time Stats: Live move history, evaluation scores, and game analysis
  • Non-blocking UI: Threaded bot calculations keep interface responsive
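The usual pattern behind a non-blocking UI like this is to run the engine search in a worker thread and hand the result back through a queue that the event loop polls each frame. A minimal sketch, with a stand-in for the real engine call (the names here are hypothetical, not the repository's API):

```python
import queue
import threading
import time

result_queue: "queue.Queue[str]" = queue.Queue()

def find_best_move():
    """Stand-in for the real engine search (which may take 2-5 s)."""
    time.sleep(0.1)
    result_queue.put("e2e4")

# Kick off the search without blocking the event loop.
threading.Thread(target=find_best_move, daemon=True).start()

# Inside the Pygame loop: poll instead of blocking on join().
move = None
while move is None:
    try:
        move = result_queue.get_nowait()
    except queue.Empty:
        pass  # keep drawing frames and handling input events here
```

Polling with `get_nowait()` (rather than a blocking `get()`) is what keeps the window redrawing and responsive while the bot thinks.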

🚀 Training Pipeline

  • Supervised Learning: Train on PGN databases of master games
  • Reinforcement Learning: Generate training data through self-play
  • Model Checkpointing: Automatic saving and version management
  • Performance Monitoring: TensorBoard integration with loss tracking
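Automatic checkpoint versioning typically boils down to an agreed filename convention plus a helper that resumes from the newest file. The sketch below assumes an illustrative `model_epoch_N.pt` naming scheme; the repository's actual convention may differ.

```python
import re
from pathlib import Path

def latest_checkpoint(model_dir):
    """Return the highest-epoch checkpoint in model_dir, or None if none exist.

    Assumes files named like 'model_epoch_12.pt' (illustrative convention).
    """
    pattern = re.compile(r"model_epoch_(\d+)\.pt$")
    best, best_epoch = None, -1
    for path in Path(model_dir).glob("model_epoch_*.pt"):
        match = pattern.search(path.name)
        if match and int(match.group(1)) > best_epoch:
            best_epoch = int(match.group(1))
            best = path
    return best

# Demo: create a few empty checkpoint files and pick the newest.
import tempfile
tmp = Path(tempfile.mkdtemp())
for epoch in (1, 5, 12):
    (tmp / f"model_epoch_{epoch}.pt").touch()
print(latest_checkpoint(tmp).name)  # model_epoch_12.pt
```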

📊 Performance

Metric               Value
Estimated Elo        1800-2200
Policy Loss          6.5 → 2.8 (after training)
Value Loss           1.0 → 0.86 (after training)
Search Speed         800 simulations in 2-5 s
Opening Positions    500,000+

🚀 Quick Start

Installation

# Clone repository
git clone https://github.com/your-username/chess-ml-bot.git
cd chess-ml-bot

# Create virtual environment
python -m venv chess_env
source chess_env/bin/activate  # Windows: chess_env\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Install CUDA PyTorch (for GPU training)
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

Play Against the Bot

python main.py --interface gui

Train Your Own Model

# Supervised training
python train_model.py --mode supervised --epochs 30 --batch-size 64

# Self-play training
python train_model.py --mode self_play --games 100

🎮 Usage

GUI Controls

  • Click: Select and move pieces
  • N: New game
  • U: Undo move
  • F: Flip board
  • A: Analysis mode

Command Line

# CLI gameplay
python main.py --interface cli

# Analysis mode
python main.py --analyze position.fen

# Tournament mode
python tournament.py --games 50

๐Ÿ—๏ธ Architecture

๐Ÿ“ฆ chess-ml-bot/
โ”œโ”€โ”€ ๐Ÿง  core/                 # AI Engine
โ”‚   โ”œโ”€โ”€ engine.py           # Main chess engine
โ”‚   โ”œโ”€โ”€ neural_net.py       # PyTorch neural network
โ”‚   โ”œโ”€โ”€ search.py           # MCTS implementation
โ”‚   โ””โ”€โ”€ evaluation.py       # Position evaluation
โ”œโ”€โ”€ ๐ŸŽฎ ui/                   # User Interfaces
โ”‚   โ”œโ”€โ”€ gui.py              # Pygame GUI
โ”‚   โ””โ”€โ”€ cli.py              # Command line
โ”œโ”€โ”€ ๐Ÿš€ training/             # ML Training
โ”‚   โ”œโ”€โ”€ trainer.py          # Training pipeline
โ”‚   โ”œโ”€โ”€ reinforcement.py    # Self-play learning
โ”‚   โ””โ”€โ”€ data_loader.py      # Data processing
โ”œโ”€โ”€ โšก features/             # Advanced Features
โ”‚   โ”œโ”€โ”€ opening_book.py     # Opening database
โ”‚   โ”œโ”€โ”€ tablebase.py        # Endgame tablebases
โ”‚   โ””โ”€โ”€ time_manager.py     # Time allocation
โ””โ”€โ”€ ๐Ÿ“Š data/                 # Data Storage
    โ”œโ”€โ”€ models/             # Trained models
    โ”œโ”€โ”€ opening_books/      # PGN databases
    โ””โ”€โ”€ training_data/      # Training datasets

🧠 Neural Network

The model uses a ResNet-inspired architecture:

Input: 14×8×8 board representation
├── Convolutional layers (3×3 kernels)
├── 12× Residual blocks (256 filters each)
├── Batch normalization + ReLU activation
└── Dual heads:
    ├── Policy head → 4096 possible moves
    └── Value head → Position evaluation (-1 to +1)
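A residual block of the kind the diagram describes can be sketched in PyTorch as below. This is an illustrative block matching the stated shapes (14 input planes, 256 filters, 3×3 kernels, BN + ReLU, skip connection), not the repository's actual `ChessNet` implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """One 256-filter residual block: conv-BN-ReLU, conv-BN, skip connection, ReLU."""

    def __init__(self, channels: int = 256):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)  # the skip connection preserves the input signal

# A 14-plane 8x8 position, projected to 256 channels, keeps its spatial shape.
stem = nn.Conv2d(14, 256, kernel_size=3, padding=1)
block = ResidualBlock(256)
x = torch.zeros(1, 14, 8, 8)
y = block(stem(x))
print(y.shape)  # torch.Size([1, 256, 8, 8])
```

Because every block preserves the 256×8×8 shape, twelve of them can be stacked before the policy and value heads branch off.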

📈 Training Results

Loss progression over 10 epochs:

  • Policy Loss: 6.53 → 2.80 (-57%)
  • Value Loss: 1.02 → 0.86 (-16%)
  • Total Loss: 7.54 → 3.66 (-51%)

🎯 Getting Started with Training

1. Prepare Data

# Download master games (example: Lichess database)
wget https://database.lichess.org/standard/lichess_db_standard_rated_2023-01.pgn.bz2
bunzip2 lichess_db_standard_rated_2023-01.pgn.bz2
mv lichess_db_standard_rated_2023-01.pgn data/opening_books/master_games.pgn
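Lichess dumps are large, so a quick sanity check after the download is useful. Each game in a PGN file starts with a mandatory `[Event "..."]` tag, so counting those lines gives the game count with the standard library alone (the real data loader presumably uses python-chess for full parsing; this helper is just a sketch):

```python
def count_games(pgn_path):
    """Count games in a PGN file via its mandatory [Event "..."] header tags."""
    count = 0
    with open(pgn_path, encoding="utf-8", errors="replace") as f:
        for line in f:
            if line.startswith("[Event "):
                count += 1
    return count

# Demo: a tiny two-game PGN written to a temp file.
import tempfile
sample = (
    '[Event "Casual"]\n[Result "1-0"]\n\n1. e4 e5 2. Nf3 1-0\n\n'
    '[Event "Casual"]\n[Result "0-1"]\n\n1. d4 d5 2. c4 0-1\n'
)
with tempfile.NamedTemporaryFile("w", suffix=".pgn", delete=False) as f:
    f.write(sample)
    path = f.name
print(count_games(path))  # 2
```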

2. Configure Training

Edit config.json:

{
    "model": {
        "layers": 12,
        "channels": 256,
        "learning_rate": 0.001
    },
    "training": {
        "epochs": 30,
        "batch_size": 64,
        "device": "cuda"
    }
}
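Since the config is plain JSON, the training script can load it with the standard library; the key names below follow the example file shown above.

```python
import json

config_text = """
{
    "model": {"layers": 12, "channels": 256, "learning_rate": 0.001},
    "training": {"epochs": 30, "batch_size": 64, "device": "cuda"}
}
"""
config = json.loads(config_text)

# Pull out the hyperparameters a trainer would use.
lr = config["model"]["learning_rate"]
epochs = config["training"]["epochs"]
print(lr, epochs)  # 0.001 30
```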

3. Start Training

python train_model.py --config config.json

🔧 Advanced Usage

Custom Network Architecture

from core.neural_net import ChessNet

# Create custom model
model = ChessNet(
    input_channels=14,
    residual_blocks=20,
    filters=512
)

Engine Integration

from core.engine import ChessEngine

# Initialize engine
engine = ChessEngine()
best_move = engine.get_best_move()

Self-Play Training

from training.reinforcement import SelfPlayLearning

# Start self-play
trainer = SelfPlayLearning(model)
trainer.run_self_play(num_games=1000)

🎪 Demo & Examples

Example Game

1. e4 e5 2. Nf3 Nc6 3. Bb5 a6 4. Ba4 Nf6 5. O-O Be7
Bot evaluation: +0.2 (slight advantage to White)
Best move: d3 (35% confidence)

Analysis Mode

python main.py --analyze "rnbqkbnr/pppppppp/8/8/4P3/8/PPPP1PPP/RNBQKBNR b KQkq e3 0 1"
# Outputs detailed position analysis and best moves
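The string passed to `--analyze` is a FEN record: six space-separated fields (piece placement, side to move, castling rights, en passant square, halfmove clock, fullmove number). A minimal stdlib parse of those fields, for readers unfamiliar with the format (a real engine would use python-chess's `Board(fen)` instead):

```python
def parse_fen(fen):
    """Split a FEN string into its six standard fields."""
    placement, turn, castling, en_passant, halfmove, fullmove = fen.split()
    ranks = placement.split("/")
    assert len(ranks) == 8, "board placement must have 8 ranks"
    return {
        "ranks": ranks,                      # rank 8 first, rank 1 last
        "turn": "white" if turn == "w" else "black",
        "castling": castling,
        "en_passant": en_passant,
        "halfmove_clock": int(halfmove),
        "fullmove_number": int(fullmove),
    }

# The position from the analysis example above: 1. e4, Black to move.
fen = "rnbqkbnr/pppppppp/8/8/4P3/8/PPPP1PPP/RNBQKBNR b KQkq e3 0 1"
info = parse_fen(fen)
print(info["turn"], info["en_passant"])  # black e3
```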

๐Ÿค Contributing

We welcome contributions! Here's how to get started:

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature-name
  3. Make your changes and add tests
  4. Run tests: python -m pytest tests/
  5. Submit a pull request

Development Setup

# Install development dependencies
pip install -r requirements-dev.txt

# Run code formatting
black .
flake8 .

# Run type checking
mypy core/ training/

๐Ÿ› Troubleshooting

Common Issues

Issue                  Solution
GPU not detected       Install CUDA-enabled PyTorch
Missing opening book   Download PGN files to data/opening_books/
GUI freezing           Enable threading in config
Training slow          Use GPU and increase batch size

Performance Tips

  • Use mixed precision training: --mixed-precision
  • Increase batch size for GPU: --batch-size 128
  • Monitor with TensorBoard: tensorboard --logdir=logs/

📜 License

This project is licensed under the MIT License - see the LICENSE file for details.

๐Ÿ™ Acknowledgments

  • DeepMind AlphaZero for the self-play methodology
  • Stockfish for benchmarking and inspiration
  • python-chess library for chess logic
  • PyTorch team for the ML framework
  • Lichess for open chess databases



🔥 Ready to play chess against AI?

Play Now • Documentation
