Hands-On Music Generation with Magenta

By: Alexandre DuBreuil

Overview of this book

The importance of machine learning (ML) in art is growing at a rapid pace due to recent advancements in the field, and Magenta is at the forefront of this innovation. With this book, you’ll follow a hands-on approach to using ML models for music generation, learning how to integrate them into an existing music production workflow. Complete with practical examples and explanations of the theoretical background required to understand the underlying technologies, this book is the perfect starting point to begin exploring music generation. The book will help you learn how to use the models in Magenta for generating percussion sequences, monophonic and polyphonic melodies in MIDI, and instrument sounds in raw audio. Through practical examples and in-depth explanations, you’ll understand ML models such as RNNs, VAEs, and GANs. Using this knowledge, you’ll create and train your own models for advanced music generation use cases, along with preparing new datasets. Finally, you’ll get to grips with integrating Magenta with other technologies, such as digital audio workstations (DAWs), and using Magenta.js to distribute music generation apps in the browser. By the end of this book, you'll be well-versed with Magenta and have developed the skills you need to use ML models for music generation in your own style.
Table of Contents (16 chapters)

Section 1: Introduction to Artwork Generation
Section 2: Music Generation with Machine Learning
Section 3: Training, Learning, and Generating a Specific Style
Section 4: Making Your Models Interact with Other Applications

Chapter 2: Generating Drum Sequences with the Drums RNN

  1. Given the current sequence, predict a score for the next note, then repeat that prediction for each step you want to generate (see the generation sketch after this list).

  2. (1) RNNs operate on sequences of vectors for both the input and the output, which suits sequential data such as a music score, and (2) they keep an internal state composed of the previous output steps, which lets them make predictions based on past inputs, not only the current input.
  3. The hidden layer will receive (1) h(t + 1), which is the output of the previous hidden layer, and (2) x(t + 2), which is the input of the current step (see the recurrence sketch after this list).
  4. The number of bars generated will be 2 bars, or 32 steps, since we have 16 steps per bar. At 80 QPM, each step takes 0.1875 seconds, because you take the number of seconds in a minute, divide it by the QPM, and then divide by the number of steps per quarter note: 60 / 80 / 4 = 0.1875 seconds (see the timing sketch after this list).
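
To make answers 1 to 3 more concrete, here is a minimal NumPy sketch of the recurrence and of the step-by-step generation loop. It is not Magenta's Drums RNN code; the vocabulary size, hidden size, and greedy sampling rule are illustrative assumptions.

import numpy as np

# Minimal sketch of one RNN cell and a step-by-step generation loop.
# This is NOT Magenta's Drums RNN implementation; the vocabulary size,
# hidden size, and greedy sampling rule below are illustrative assumptions.
VOCAB_SIZE = 16    # hypothetical number of drum events per step
HIDDEN_SIZE = 32

rng = np.random.default_rng(42)
W_xh = rng.standard_normal((HIDDEN_SIZE, VOCAB_SIZE)) * 0.1
W_hh = rng.standard_normal((HIDDEN_SIZE, HIDDEN_SIZE)) * 0.1
W_hy = rng.standard_normal((VOCAB_SIZE, HIDDEN_SIZE)) * 0.1

def rnn_step(x_t, h_prev):
    # One recurrence step: the hidden layer combines the previous hidden
    # state (answer 3, point 1) with the current input (answer 3, point 2),
    # then outputs a score for every possible next event.
    h_t = np.tanh(W_xh @ x_t + W_hh @ h_prev)
    scores = W_hy @ h_t
    return h_t, scores

def generate(primer_index, num_steps):
    # Answer 1 in code: predict scores for the next step, pick an event,
    # feed it back as the next input, and repeat for every step to generate.
    h = np.zeros(HIDDEN_SIZE)
    x = np.zeros(VOCAB_SIZE)
    x[primer_index] = 1.0
    sequence = [primer_index]
    for _ in range(num_steps):
        h, scores = rnn_step(x, h)
        next_index = int(np.argmax(scores))  # greedy choice, for simplicity
        sequence.append(next_index)
        x = np.zeros(VOCAB_SIZE)
        x[next_index] = 1.0
    return sequence

print(generate(primer_index=0, num_steps=31))  # 1 primer step + 31 generated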
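
The timing arithmetic from answer 4 can be verified with a few lines of Python; the constant names are assumptions made for readability, only the numbers come from the answer above.

# Quick check of the timing arithmetic in answer 4.
SECONDS_PER_MINUTE = 60
QPM = 80                # quarter notes per minute
STEPS_PER_QUARTER = 4   # sixteenth-note steps per quarter note
STEPS_PER_BAR = 16
NUM_BARS = 2

seconds_per_step = SECONDS_PER_MINUTE / QPM / STEPS_PER_QUARTER
total_steps = NUM_BARS * STEPS_PER_BAR
print(seconds_per_step)                 # 0.1875 seconds per step
print(total_steps * seconds_per_step)   # 6.0 seconds for the 2 generated bars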