This project trains an LSTM model to generate short classical music melodies in MIDI format. It tokenizes MIDI files into pitch–duration sequences, learns musical structure, and then samples new melodies.
This is an experimental project I created to explore the intersection of machine learning and music.
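The pitch–duration tokenization mentioned above can be sketched as follows. This is an illustrative, dependency-free example, not the repo's actual tokenizer: the function names, the `(pitch, duration)` pair encoding, and the vocabulary scheme are assumptions for demonstration.

```python
# Minimal sketch of pitch–duration tokenization (illustrative only; the
# actual token scheme in this repo may differ). Each unique
# (MIDI pitch, duration) pair is assigned a single integer token id.

def build_vocab(sequences):
    """Assign one integer id per unique (pitch, duration) pair."""
    vocab = {}
    for seq in sequences:
        for pair in seq:
            if pair not in vocab:
                vocab[pair] = len(vocab)
    return vocab

def encode(seq, vocab):
    """Turn a melody into a list of integer token ids."""
    return [vocab[pair] for pair in seq]

def decode(tokens, vocab):
    """Map token ids back to (pitch, duration) pairs."""
    inv = {i: pair for pair, i in vocab.items()}
    return [inv[t] for t in tokens]

# Example: a short fragment as (MIDI pitch, duration-in-beats) pairs.
melody = [(60, 1.0), (62, 0.5), (64, 0.5), (60, 1.0)]
vocab = build_vocab([melody])
tokens = encode(melody, vocab)   # repeated (60, 1.0) reuses the same id
```

Integer sequences like `tokens` are what an LSTM consumes during training; decoding reverses the mapping so sampled token ids can be written back out as notes.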
- Clone the repository and enter it:
  - `git clone https://github.com/Will-Turchin/MidiAI`
  - `cd MidiAI`
- Install the dependencies; the primary ones are `pretty_midi` and PyTorch (`torch`)
- Ensure your Python environment is version 3.8 or newer
- Run `test.py` to generate 10 MIDI files (~200 tokens each) into the generated songs folder
- To adjust how 'creative' the model is, change the temperature parameter in `test.py`
- To retrain the model, run `test.py`; parameters such as the number of training epochs can be adjusted inside this program
- To add or remove training data, place or delete MIDI files in the `training_data` directory
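The temperature parameter mentioned above typically works by rescaling the model's output logits before sampling. A minimal, dependency-free sketch of this idea follows; the function name and logits are illustrative, and the actual sampling code in `test.py` may differ.

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0, rng=random):
    """Sample an index from logits after temperature scaling.

    Lower temperature -> sharper distribution (more predictable output);
    higher temperature -> flatter distribution (more 'creative' output).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):         # inverse-CDF sampling
        cum += p
        if r <= cum:
            return i
    return len(probs) - 1

# With a very low temperature, sampling almost always picks the argmax token:
logits = [2.0, 0.5, -1.0]
picks = [sample_with_temperature(logits, temperature=0.1) for _ in range(100)]
```

Raising the temperature well above 1.0 instead makes all tokens nearly equally likely, which is why high settings tend to produce more surprising but less coherent melodies.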
- The model occasionally reproduces recognizable classical phrases if overtrained.
- Output is monophonic (one note at a time).
- This is a prototype and not intended for production use.
- Support for polyphonic melodies
- Adding style-conditioning (e.g., composer or era)
- More robust evaluation metrics for creativity vs accuracy