- What are the main problems RNNs suffer from during training, and what solutions do LSTMs bring?
- What is a simpler alternative to LSTM memory cells? What are its advantages and disadvantages?
- You want to configure the lookback encoder-decoder from the Melody RNN to learn structures with a 3/4 time signature. How big is the binary step counter? How are the lookback distances configured for 3 lookback distances?
- Applying an attention mask of [0.1, 0.5], with n = 2, to the previous step 1 of [1, 0, 0, 0] and step 2 of [0, 1, 0, x] gives the resulting vector [0.10, 0.50, 0.00, 0.25]. What is the value of x? (See the sketch after this list.)
- You have the following Polyphony RNN encoding: { (START), (NEW_NOTE, 67), (NEW_NOTE, 64), (NEW_NOTE, 60), (STEP_END), (CONTINUED_NOTE, 67), (CONTINUED_NOTE, 64), (CONTINUED_NOTE, 60), (STEP_END), (CONTINUED_NOTE, 67), (CONTINUED_NOTE, 64), (CONTINUED_NOTE...
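As a worked sketch for the attention-mask question above: this assumes the mask is applied as a simple weighted sum of the previous n step vectors, which is how the question's numbers fit together. The variable names are illustrative only, not Magenta API calls.

```python
import numpy as np

# Illustrative sketch: an attention mask of length n weights the previous
# n step vectors, and their weighted sum gives the resulting vector.
mask = np.array([0.1, 0.5])                  # attention mask, n = 2
step_1 = np.array([1.0, 0.0, 0.0, 0.0])      # previous step 1
result = np.array([0.10, 0.50, 0.00, 0.25])  # vector after applying the mask

# Since result = mask[0] * step_1 + mask[1] * step_2, the unknown last
# entry x of step 2 = [0, 1, 0, x] can be recovered by inverting the sum:
x = (result[3] - mask[0] * step_1[3]) / mask[1]
print(x)  # 0.5
```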
Hands-On Music Generation with Magenta
Overview of this book
The importance of machine learning (ML) in art is growing at a rapid pace due to recent advancements in the field, and Magenta is at the forefront of this innovation. With this book, you’ll follow a hands-on approach to using ML models for music generation, learning how to integrate them into an existing music production workflow. Complete with practical examples and explanations of the theoretical background required to understand the underlying technologies, this book is the perfect starting point to begin exploring music generation.
The book will help you learn how to use the models in Magenta for generating percussion sequences, monophonic and polyphonic melodies in MIDI, and instrument sounds in raw audio. Through practical examples and in-depth explanations, you’ll understand ML models such as RNNs, VAEs, and GANs. Using this knowledge, you’ll create and train your own models for advanced music generation use cases, along with preparing new datasets. Finally, you’ll get to grips with integrating Magenta with other technologies, such as digital audio workstations (DAWs), and using Magenta.js to distribute music generation apps in the browser.
By the end of this book, you'll be well-versed in Magenta and will have developed the skills you need to use ML models for music generation in your own style.
Table of Contents (16 chapters)
Preface
Section 1: Introduction to Artwork Generation
Introduction to Magenta and Generative Art
Section 2: Music Generation with Machine Learning
Generating Drum Sequences with the Drums RNN
Generating Polyphonic Melodies
Latent Space Interpolation with MusicVAE
Audio Generation with NSynth and GANSynth
Section 3: Training, Learning, and Generating a Specific Style
Data Preparation for Training
Training Magenta Models
Section 4: Making Your Models Interact with Other Applications
Magenta in the Browser with Magenta.js
Making Magenta Interact with Music Applications
Assessments
Other Books You May Enjoy