Hands-On Mathematics for Deep Learning

By : Jay Dawani
Overview of this book

Most programmers and data scientists struggle with mathematics, having either overlooked or forgotten core mathematical concepts. This book uses Python libraries to help you understand the math required to build deep learning (DL) models. You'll begin by learning about the core mathematical and modern computational techniques used to design and implement DL algorithms. The book covers essential topics, such as linear algebra, eigenvalues and eigenvectors, singular value decomposition, and gradient-based algorithms, to help you understand how deep neural networks are trained. Later chapters focus on important neural networks, such as the linear neural network and the multilayer perceptron, with a primary focus on helping you learn how each model works. As you advance, you will delve into the math used for regularization, multi-layered DL, forward propagation, optimization, and backpropagation to understand what it takes to build full-fledged DL models. Finally, you'll explore convolutional neural network (CNN), recurrent neural network (RNN), and generative adversarial network (GAN) models and their applications. By the end of this book, you'll have built a strong foundation in the mathematical concepts behind neural networks and DL, which will help you confidently research and build custom models.
Table of Contents (19 chapters)

1. Section 1: Essential Mathematics for Deep Learning
7. Section 2: Essential Neural Networks
13. Section 3: Advanced Deep Learning Concepts Simplified

Dataset augmentation

Deep feedforward networks, as we have learned, are very data-hungry: they use all of this data to learn the underlying data distribution, and they then apply that knowledge to make predictions on unseen data. The more data they see, the more likely it is that what they encounter in the test set will be an interpolation of the distribution they have already learned. But getting a large enough dataset with good-quality labeled data is by no means a simple task, especially for problems where gathering data can be very costly. One way to circumvent this issue is data augmentation; that is, generating synthetic data and using it to train our deep neural network.
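Before turning to generative models, it is worth noting that the simplest and most common form of augmentation applies label-preserving transforms (flips, crops, added noise) to existing examples. The sketch below is a minimal illustration of this idea using NumPy on a fake grayscale image; the function name `augment_image` and the specific transforms are illustrative choices, not from the text.

```python
import numpy as np

def augment_image(image, rng):
    """Produce simple synthetic variants of a single image array.

    A minimal sketch of label-preserving augmentation: each variant
    depicts the same object, so it can reuse the original label.
    """
    variants = []
    variants.append(np.fliplr(image))  # mirror left-right
    variants.append(np.flipud(image))  # mirror top-bottom
    # Additive Gaussian noise, clipped to keep pixels in [0, 1]
    noisy = image + rng.normal(0.0, 0.05, size=image.shape)
    variants.append(np.clip(noisy, 0.0, 1.0))
    return variants

rng = np.random.default_rng(0)
image = rng.random((28, 28))          # a stand-in 28x28 grayscale image
variants = augment_image(image, rng)
print(len(variants))                  # one original yields 3 extra examples
```

Each call turns one labeled example into several, effectively enlarging the training set at negligible cost. Generative-model-based synthesis, described next, goes further by sampling entirely new examples from a learned distribution.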

The way synthetic data generation works is that we use a generative model (more on this in Chapter 12, Generative Models) to learn the underlying distribution...