Hands-On Mathematics for Deep Learning

By: Jay Dawani

Overview of this book

Most programmers and data scientists struggle with mathematics, having either overlooked or forgotten core mathematical concepts. This book uses Python libraries to help you understand the math required to build deep learning (DL) models. You'll begin by learning about core mathematical and modern computational techniques used to design and implement DL algorithms. The book covers essential topics, such as linear algebra, eigenvalues and eigenvectors, singular value decomposition (SVD), and gradient algorithms, to help you understand how to train deep neural networks. Later chapters focus on important neural networks, such as the linear neural network and multilayer perceptrons, with a primary focus on helping you learn how each model works. As you advance, you'll delve into the math used for regularization, multi-layered DL, forward propagation, optimization, and backpropagation techniques to understand what it takes to build full-fledged DL models. Finally, you'll explore convolutional neural network (CNN), recurrent neural network (RNN), and generative adversarial network (GAN) models and their applications. By the end of this book, you'll have built a strong foundation in neural networks and DL mathematical concepts, which will help you confidently research and build custom models in DL.
Table of Contents (19 chapters)

Section 1: Essential Mathematics for Deep Learning
Section 2: Essential Neural Networks
Section 3: Advanced Deep Learning Concepts Simplified

The need for RNNs

In the previous chapter, we learned about CNNs and their effectiveness on image- and time series-related tasks that have data with a grid-like structure. We also saw how CNNs are inspired by how the human visual cortex processes visual input. Similarly, the RNNs that we will learn about in this chapter are also biologically inspired.

The need for this form of neural network arises from the fact that feedforward neural networks (FNNs) are unable to capture time-based dependencies in data.
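To see why recurrence helps, here is a minimal sketch (with made-up weight shapes and random data, not code from the book) of the standard RNN update h_t = tanh(W_x x_t + W_h h_{t-1} + b). Because the hidden state h is carried from one time step to the next, the final state depends on the order of the inputs, which a feedforward net applied to each input independently cannot capture:

```python
import numpy as np

# Illustrative simple-RNN forward pass: h_t = tanh(W_x x_t + W_h h_{t-1} + b).
# Sizes (3 inputs, 4 hidden units) and random weights are arbitrary choices.
rng = np.random.default_rng(0)
W_x = rng.normal(size=(4, 3)) * 0.5  # input-to-hidden weights
W_h = rng.normal(size=(4, 4)) * 0.5  # hidden-to-hidden (recurrent) weights
b = np.zeros(4)

def rnn_forward(inputs):
    """Run the recurrence over a sequence and return the final hidden state."""
    h = np.zeros(4)
    for x in inputs:
        h = np.tanh(W_x @ x + W_h @ h + b)
    return h

seq = [rng.normal(size=3) for _ in range(5)]
final_h = rnn_forward(seq)
# Reversing the sequence changes the final hidden state: the recurrence
# makes the network sensitive to temporal order.
```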

The first model of an RNN was created by John Hopfield in 1982 in an attempt to understand how associative memory in our brains works. This is known as a Hopfield network. It is a fully connected, single-layer recurrent network that stores and accesses information similarly to how we think our brains do.
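The associative-memory idea can be sketched in a few lines: store a +1/-1 pattern in the weights with the Hebbian rule, then recover it from a corrupted copy by repeatedly applying the network's update. The function names and the 8-unit example below are my own illustration, not code from the book:

```python
import numpy as np

def train_hopfield(patterns):
    """Build the Hopfield weight matrix from +/-1 pattern vectors (Hebbian rule)."""
    n = patterns[0].size
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # neurons have no self-connections
    return W / len(patterns)

def recall(W, state, steps=10):
    """Synchronously update the state until it stops changing (a fixed point)."""
    for _ in range(steps):
        new_state = np.sign(W @ state)
        new_state[new_state == 0] = 1  # break ties toward +1
        if np.array_equal(new_state, state):
            break
        state = new_state
    return state

# Store one pattern, corrupt one bit, and let the network recover it.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = train_hopfield([pattern])
noisy = pattern.copy()
noisy[0] *= -1  # flip one bit
print(recall(W, noisy))  # converges back to the stored pattern
```

The stored pattern acts as an attractor: starting from a nearby (partially corrupted) state, the dynamics settle back onto it, which is the sense in which the network "accesses" a memory from an incomplete cue.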