Hands-On Deep Learning Algorithms with Python

By: Sudharsan Ravichandiran

Overview of this book

Deep learning is one of the most popular domains in the AI space, allowing you to develop multi-layered models of varying complexity. This book introduces you to popular deep learning algorithms, from basic to advanced, and shows you how to implement them from scratch using TensorFlow. Throughout the book, you will gain insights into each algorithm, the mathematical principles behind it, and how to implement it in the best possible manner. The book starts by explaining how you can build your own neural networks, followed by an introduction to TensorFlow, the powerful Python-based library for machine learning and deep learning. Moving on, you will get up to speed with gradient descent variants such as NAG, AMSGrad, AdaDelta, Adam, and Nadam. The book then provides insights into recurrent neural networks (RNNs) and LSTM networks, and shows how to generate song lyrics with an RNN. Next, you will master the math necessary to work with convolutional and capsule networks, widely used for image recognition tasks. You will also learn how machines understand the semantics of words and documents using CBOW, skip-gram, and PV-DM. Finally, you will explore GANs, including InfoGAN and LSGAN, and autoencoders, such as contractive autoencoders and VAEs. By the end of this book, you will be equipped with all the skills you need to implement deep learning in your own projects.
Table of Contents (17 chapters)
Section 1: Getting Started with Deep Learning
Section 2: Fundamental Deep Learning Algorithms
Section 3: Advanced Deep Learning Algorithms

Chapter 3 - Gradient Descent and Its Variants

  1. Unlike vanilla gradient descent, SGD does not iterate through all the data points in the training set before updating the model parameters. Instead, it updates the parameters after every single data point, so we don't have to wait for a complete pass over the training set before each update (see the SGD sketch after this list).
  2. In mini-batch gradient descent, instead of updating the parameters after every single training sample (as in SGD), we update them after iterating through a batch of data points. For example, with a batch size of 50, we update the model parameters after iterating through 50 data points rather than after each individual data point (see the mini-batch sketch after this list).
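
To make the per-sample update concrete, here is a minimal NumPy sketch of SGD on a linear regression model with a mean squared error loss. The function name, synthetic data, and hyperparameter values are illustrative assumptions, not the book's TensorFlow code:

```python
import numpy as np

def sgd(X, y, lr=0.01, epochs=10):
    """SGD for linear regression: update theta after every single data point."""
    n_samples, n_features = X.shape
    theta = np.zeros(n_features)                      # model parameters
    for _ in range(epochs):
        for i in np.random.permutation(n_samples):    # shuffle sample order
            x_i, y_i = X[i], y[i]
            # Gradient of the squared error for one sample:
            # 2 * x_i * (x_i . theta - y_i)
            grad = 2.0 * x_i * (x_i @ theta - y_i)
            theta -= lr * grad                        # update immediately
    return theta
```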
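
The corresponding mini-batch sketch differs only in averaging the gradient over a batch of data points (50 here, matching the example above) before each update:

```python
import numpy as np

def minibatch_gd(X, y, batch_size=50, lr=0.01, epochs=10):
    """Mini-batch gradient descent: one parameter update per batch."""
    n_samples, n_features = X.shape
    theta = np.zeros(n_features)
    for _ in range(epochs):
        indices = np.random.permutation(n_samples)    # shuffle each epoch
        for start in range(0, n_samples, batch_size):
            batch = indices[start:start + batch_size]
            X_b, y_b = X[batch], y[batch]
            # Average gradient of MSE over the batch:
            # (2/m) * X_b^T (X_b theta - y_b)
            grad = (2.0 / len(batch)) * X_b.T @ (X_b @ theta - y_b)
            theta -= lr * grad                        # one update per batch
    return theta
```

Note that with batch_size=1 this reduces to the SGD update above, and with batch_size=len(X) it becomes vanilla full-batch gradient descent, which makes the relationship between the three variants explicit.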