This chapter is important because it introduces the basic concepts of music generation with machine learning, all of which we'll build upon throughout this book.
In this chapter, we learned what generative music is and that its origins predate even the advent of computers. By looking at specific examples, we saw different types of generative music: random, algorithmic, and stochastic.
We also learned how machine learning is rapidly transforming how we generate music. We covered common music representations, namely MIDI, waveforms, and spectrograms, as well as the various neural network architectures we'll explore throughout this book.
Finally, we saw an overview of what we can do with Magenta in terms of generating and processing images, audio, and scores. In doing so, we introduced the primary models we'll be using throughout this book: Drums RNN, Melody RNN, MusicVAE, NSynth, and others.
You also installed your development environment for this book and generated your first musical score. Now, we're ready to go!
The next chapter will delve deeper into some of the concepts we introduced in this chapter. We'll explain what an RNN is and why it is important for music generation. Then, we'll use the Drums RNN model both on the command line and in Python while explaining its inputs and outputs. We'll finish by creating the first building block of our autonomous music generation system.