Magenta comes with multiple command-line scripts (installed in the bin folder of your Magenta environment). Each model has its own console scripts for dataset preparation, model training, and generation. Let's take a look:
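For example, the Drums RNN model used in this section installs one console script per task (the names match Magenta's console entry points; passing --help should print each script's flags, assuming the usual behavior of these TensorFlow/absl-based scripts):
> drums_rnn_create_dataset --help
> drums_rnn_train --help
> drums_rnn_generate --help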
- While in the Magenta environment, download the Drums RNN pre-trained model, drum_kit_rnn:
> curl --output "drum_kit_rnn.mag" "http://download.magenta.tensorflow.org/models/drum_kit_rnn.mag"
- Then, use the following command to generate your first few MIDI files:
> drums_rnn_generate --bundle_file="drum_kit_rnn.mag"
By default, the preceding command generates the files in /tmp/drums_rnn/generated (C:\tmp\drums_rnn\generated on Windows). You should see 10 new MIDI files, each named with a timestamp and a generation index.
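You can override these defaults with flags. The following is a sketch that assumes the Drums RNN generation script exposes --output_dir, --num_outputs, and --temperature (run drums_rnn_generate --help to confirm on your install); the output folder name is just an example, and a temperature above 1.0 makes the sampling slightly more random:
> drums_rnn_generate --bundle_file="drum_kit_rnn.mag" --output_dir="/tmp/drums_rnn/experiment1" --num_outputs=3 --temperature=1.1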
If you are using a GPU, you can verify that TensorFlow is using it by searching for "Created TensorFlow device ... -> physical GPU (name: ..., compute capability: ...)" in the script's output. If that line is missing, the script is running on your CPU.
You can also check your GPU usage while Magenta is executing, which should go up if Magenta is using the GPU properly.
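One way to do this on a machine with an NVIDIA GPU (assuming nvidia-smi is installed with the GPU drivers) is to watch utilization in a second terminal while the generation runs:
> nvidia-smi -l 1
The -l 1 option refreshes the report every second; the GPU-Util column and the process list should show activity while drums_rnn_generate is executing.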
- Finally, to listen to the generated MIDI, use your software synthesizer or MuseScore. For the software synthesizer, run the command for your platform, replacing PATH_TO_SF2 with the path to your SoundFont file and PATH_TO_MIDI with the path to a generated MIDI file (a concrete example follows the list):
- Linux: fluidsynth -a pulseaudio -g 1 -n -i PATH_TO_SF2 PATH_TO_MIDI
- macOS: fluidsynth -a coreaudio -g 1 -n -i PATH_TO_SF2 PATH_TO_MIDI
- Windows: fluidsynth -g 1 -n -i PATH_TO_SF2 PATH_TO_MIDI
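As an illustration only, with paths that are assumptions depending on your system: on Ubuntu with the fluid-soundfont-gm package installed, the SoundFont typically lives at /usr/share/sounds/sf2/FluidR3_GM.sf2, and the generated files are in /tmp/drums_rnn/generated, so the Linux command could look like this (the shell expands the wildcard and fluidsynth plays the files in sequence):
> fluidsynth -a pulseaudio -g 1 -n -i /usr/share/sounds/sf2/FluidR3_GM.sf2 /tmp/drums_rnn/generated/*.mid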
Congratulations! You have generated your first musical score using a machine learning model! You'll learn how to generate much more throughout this book.