Home
Block-Reign is a lightweight player-versus-artificial intelligence (AI) grid-based battle game in which the AI continuously learns from matches played against human players. The game combines a simple browser-based interface with reinforcement learning techniques on the backend.
The project is primarily designed as an experimental platform to demonstrate tabular Q-learning and Deep Q-Network (DQN) reinforcement learning in a real-time interactive game environment.
Block-Reign is played on a 10×10 grid where the player competes against an AI opponent.
- Both the player and the AI can move across the grid.
- Both sides can shoot to eliminate the opponent.
- A match ends when one side defeats the other.
- Arrow Keys – Move
- Space Bar – Shoot
The game runs locally in a web browser while the logic and AI operate on a Python server.
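As a rough picture of that split, the server side might look something like the sketch below. Flask is assumed here only because the game is served at http://localhost:5000, and the route names are illustrative, not taken from game_server.py.

```python
# Hypothetical sketch of the browser/server split. Flask and the route
# names are assumptions for illustration, not the actual game_server.py code.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/")
def index():
    # Serve the grid-based game UI that runs in the browser.
    return app.send_static_file("index.html")

@app.route("/match_result", methods=["POST"])
def match_result():
    # The browser client reports a finished match; the server would pass
    # the payload to the learning code (see the Q-learning sketch below).
    outcome = request.get_json()
    return jsonify({"status": "recorded", "winner": outcome.get("winner")})

if __name__ == "__main__":
    app.run(port=5000)
```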
The live game server uses a Q-learning–based AI that:
- Maintains a Q-table mapping states to actions
- Learns directly from completed matches
- Persists training data between sessions
After each match, the client sends match results to the server, allowing the AI to update its policy and improve over time.
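A minimal sketch of such an update step is shown below; the state encoding, action set, reward values, and hyperparameters are assumptions made for illustration and do not reproduce the actual server code.

```python
# Minimal tabular Q-learning sketch. The action set, reward scheme, and
# hyperparameters here are illustrative assumptions, not Block-Reign's code.
from collections import defaultdict

ALPHA = 0.1          # learning rate
GAMMA = 0.9          # discount factor
ACTIONS = ("up", "down", "left", "right", "shoot")

# Q-table: maps (state, action) pairs to an estimated long-term value.
q_table = defaultdict(float)

def update_from_match(transitions, ai_won):
    """Replay a finished match as (state, action, next_state) steps and
    apply the standard Q-learning update, rewarding only the final outcome."""
    for i, (state, action, next_state) in enumerate(transitions):
        terminal = i == len(transitions) - 1
        reward = (1.0 if ai_won else -1.0) if terminal else 0.0
        best_next = 0.0 if terminal else max(q_table[(next_state, a)] for a in ACTIONS)
        td_target = reward + GAMMA * best_next
        q_table[(state, action)] += ALPHA * (td_target - q_table[(state, action)])
```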
AI progress is saved automatically under:
training/models/simple_ai.pkl
This enables long-term learning across multiple play sessions.
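One plausible way to picture this persistence step is a load-or-initialize pattern like the one below; the path matches the location above, while the helper names are hypothetical.

```python
# Sketch of persisting the learned model between sessions. The path matches
# the documented location; the helper names are hypothetical.
import os
import pickle

MODEL_PATH = "training/models/simple_ai.pkl"

def load_model():
    """Return the saved Q-table, or an empty one if no model exists yet."""
    if os.path.exists(MODEL_PATH):
        with open(MODEL_PATH, "rb") as f:
            return pickle.load(f)
    return {}  # an untrained AI starts from an empty table

def save_model(q_table):
    """Write the Q-table back to disk so learning carries across sessions."""
    os.makedirs(os.path.dirname(MODEL_PATH), exist_ok=True)
    with open(MODEL_PATH, "wb") as f:
        pickle.dump(q_table, f)
```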
For experimentation, the project includes a Deep Q-Network (DQN) trainer implemented in ai_trainer.py. This trainer supports:
- Batch training
- Replay buffers
- Neural-network-based policy learning

The DQN trainer is not used by the live game server by default.
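The two main ingredients listed above, a replay buffer and batched updates, can be sketched roughly as follows; this is illustrative only and does not mirror the contents of ai_trainer.py.

```python
# Illustrative sketch of a replay buffer and a batched training step, the
# building blocks a DQN trainer typically combines. Not taken from ai_trainer.py.
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size store of (state, action, reward, next_state, done) tuples."""
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        # Random sampling breaks the correlation between consecutive steps.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

def train_step(network, buffer, batch_size=32):
    """One batched update: wait until enough experience has accumulated,
    then fit the network on a random mini-batch of past transitions."""
    if len(buffer) < batch_size:
        return
    batch = buffer.sample(batch_size)
    # Q-value targets would be computed from `batch` here and used for one
    # gradient step; `network.update` is a placeholder for that framework call.
    network.update(batch)
```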
As of the latest recorded training state:
- Win rate: 59%
- Matches played: 78
- Wins: 46
- Losses: 36
- Latest result: AI victory
These statistics reflect the persisted Q-learning model.
- Python 3.8 or later
- pip package manager
Running locally
Install dependencies:
pip install -r requirements.txt
Start the game server:
python3 game_server.py
Open a browser and navigate to:
http://localhost:5000
The game runs entirely on a local machine.
Resetting AI Training
To reset the AI and start with an untrained model:
Delete the entire training directory:
rm -rf training/
Or delete only the saved model file:
rm training/models/simple_ai.pkl
After resetting, the AI begins learning again from new matches.
- AI models and logs are stored under training/models
- Runtime statistics are printed by the server during startup and after matches
- The training directory must be writable for proper operation
Contributions are encouraged. Developers can fork the repository, implement changes, and submit pull requests. Feature additions should include tests and updated documentation.
Block-Reign is distributed under the license specified in the project’s LICENSE file.