Python Deep Learning Projects

By: Matthew Lamons, Rahul Kumar, Abhishek Nagaraja

Overview of this book

Deep learning has been gradually revolutionizing every field of artificial intelligence, making application development easier. Python Deep Learning Projects imparts all the knowledge needed to implement complex deep learning projects in the fields of computational linguistics and computer vision. Each of these projects is unique, helping you progressively master the subject. You'll learn how to implement a text classifier system using a recurrent neural network (RNN) model and optimize it to understand the shortcomings you might experience while implementing a simple deep learning system. Similarly, you'll discover how to develop various projects, including word vector representation, open domain question answering, and building chatbots using seq-to-seq models and language modeling. In addition to this, you'll cover advanced concepts, such as regularization, gradient clipping, gradient normalization, and bidirectional RNNs, through a series of engaging projects. By the end of this book, you will have gained the knowledge to develop your own deep learning systems in a straightforward and efficient way.

DS2 model description and intuition

The DS2 architecture is composed of many layers of recurrent connections, convolutional filters, and non-linearities, together with a specific instance of batch normalization applied to the RNN layers.

To learn from datasets with large amounts of data, the DS2 model's capacity is increased by adding more depth. The architectures consist of up to 11 layers, made up of multiple bidirectional recurrent layers and convolutional layers. To optimize these models successfully, batch normalization for RNNs and a novel optimization curriculum called SortaGrad were used.
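To make this layer stack concrete, here is a minimal, runnable sketch of a DS2-style network. PyTorch is assumed here, and every layer size, kernel shape, and class count is a hypothetical placeholder rather than the exact configuration used in the book or in the DS2 paper:

import torch
import torch.nn as nn

class DS2Sketch(nn.Module):
    # Illustrative DS2-style stack: a convolutional front end over the
    # spectrogram, bidirectional GRU layers, and per-frame character outputs.
    def __init__(self, n_mels=161, n_classes=29, rnn_hidden=512, n_rnn_layers=3):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=(41, 11), stride=(2, 2), padding=(20, 5)),
            nn.BatchNorm2d(32),
            nn.Hardtanh(0, 20, inplace=True),   # clipped-ReLU non-linearity
            nn.Conv2d(32, 32, kernel_size=(21, 11), stride=(2, 1), padding=(10, 5)),
            nn.BatchNorm2d(32),
            nn.Hardtanh(0, 20, inplace=True),
        )
        # Infer the per-time-step feature size with a dummy forward pass
        with torch.no_grad():
            dummy = self.conv(torch.zeros(1, 1, n_mels, 8))
        feat = dummy.size(1) * dummy.size(2)    # channels * remaining freq bins
        self.rnn = nn.GRU(feat, rnn_hidden, num_layers=n_rnn_layers,
                          bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * rnn_hidden, n_classes)  # 2x for bidirectional

    def forward(self, spectrograms):            # (batch, 1, freq, time)
        x = self.conv(spectrograms)
        b, c, f, t = x.size()
        x = x.view(b, c * f, t).transpose(1, 2) # -> (batch, time, features)
        x, _ = self.rnn(x)
        return self.fc(x)                       # per-frame character logits

model = DS2Sketch()
logits = model(torch.randn(2, 1, 161, 100))     # 2 utterances, 100 frames each
print(logits.shape)                             # torch.Size([2, 50, 29])

A real DS2 training run would additionally feed these per-frame logits into a CTC loss and apply sequence-wise batch normalization inside the recurrent layers themselves; both are omitted here to keep the sketch short.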

The training data is a combination of the input sequence x(i) and the transcript y(i), and the goal of the RNN layers is to learn the mapping from x(i) to y(i):

training set: X = {(x(1), y(1)), (x(2), y(2)), ...}
utterance: x(i)
label: y(i)
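As a toy illustration of this structure, the following snippet builds such a training set with NumPy; the frame counts, frequency-bin count, and transcripts are all made-up values for the example:

import numpy as np

# X = {(x(1), y(1)), (x(2), y(2)), ...}: each utterance x(i) is a
# (time, frequency) spectrogram, and each label y(i) is its transcript.
x1 = np.random.randn(120, 161)   # 120 frames x 161 frequency bins (hypothetical)
x2 = np.random.randn(95, 161)    # utterances naturally vary in length
X = [(x1, "hello world"), (x2, "deep speech")]

for utterance, label in X:
    print(utterance.shape, "->", label)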

The spectrogram of power normalized...