Generating song lyrics using RNNs

We have learned enough about RNNs; now, we will look at how to generate song lyrics using RNNs. To do this, we simply build a character-level RNN, meaning that at every time step, we predict a new character.
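Before walking through an example, here is a minimal sketch of how raw text can be prepared for a character-level RNN: each unique character is mapped to an integer index so it can later be one-hot encoded and fed to the network. The string assigned to `lyrics` below is only a placeholder for the song-lyrics corpus, not the book's actual dataset:

```python
# A minimal sketch (not the book's exact code) of preparing text for a
# character-level RNN: map every unique character to an integer index.
# `lyrics` is a placeholder string standing in for the song-lyrics corpus.

lyrics = "What a beautiful day"

chars = sorted(set(lyrics))                          # unique characters (the vocabulary)
char_to_ix = {ch: i for i, ch in enumerate(chars)}   # character -> index
ix_to_char = {i: ch for i, ch in enumerate(chars)}   # index -> character
vocab_size = len(chars)

print(vocab_size)      # size of the character vocabulary
print(char_to_ix)      # e.g. {' ': 0, 'W': 1, 'a': 2, ...}
```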

Let's consider a small sentence, What a beautiful d.

At the first time step, the RNN predicts a new character as a. The sentence will be updated to, What a beautiful da.

At the next time step, it predicts a new character as y, and the sentence becomes, What a beautiful day.

In this manner, we predict a new character at each time step and generate a song. Instead of predicting a new character every time, we can also predict a new word every time, which is called a word-level RNN. For simplicity, let's start with a character-level RNN. A rough sketch of this generation loop is shown below.
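The loop below illustrates the process just described: the text generated so far is fed to the RNN, the predicted character is appended, and the extended text is fed back in. Here, `predict_next_char` is a hypothetical placeholder for a trained character-level RNN's prediction step, not a function from the book or from TensorFlow:

```python
# A minimal sketch of the character-by-character generation loop.
# `predict_next_char` is a hypothetical stand-in for a trained RNN that,
# given the text so far, returns the next character.

def generate(seed, num_chars, predict_next_char):
    text = seed
    for _ in range(num_chars):
        next_char = predict_next_char(text)   # the RNN predicts one character
        text += next_char                      # append it and feed the text back in
    return text

# Starting from the seed "What a beautiful d", the RNN would first predict 'a'
# (giving "What a beautiful da") and then 'y' (giving "What a beautiful day").
```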

But how does an RNN predict a new character at each time step? Let's suppose that at a time step t=0...