Hands-On Mathematics for Deep Learning

By: Jay Dawani

Overview of this book

Most programmers and data scientists struggle with mathematics, having either overlooked or forgotten core mathematical concepts. This book uses Python libraries to help you understand the math required to build deep learning (DL) models. You'll begin by learning about core mathematical and modern computational techniques used to design and implement DL algorithms. This book will cover essential topics, such as linear algebra, eigenvalues and eigenvectors, singular value decomposition (SVD), and gradient algorithms, to help you understand how to train deep neural networks. Later chapters focus on important neural networks, such as the linear neural network and multilayer perceptrons, with a primary focus on helping you learn how each model works. As you advance, you will delve into the math used for regularization, multi-layered DL, forward propagation, optimization, and backpropagation techniques to understand what it takes to build full-fledged DL models. Finally, you'll explore convolutional neural network (CNN), recurrent neural network (RNN), and generative adversarial network (GAN) models and their applications. By the end of this book, you'll have built a strong foundation in neural networks and DL mathematical concepts, which will help you to confidently research and build custom models in DL.
Table of Contents (19 chapters)

Section 1: Essential Mathematics for Deep Learning
Section 2: Essential Neural Networks
Section 3: Advanced Deep Learning Concepts Simplified

Adjacency matrix

As you can imagine, writing down all the pairs of connected nodes (that is, those that have edges between them) to keep track of the relationships in a graph can get tedious, especially as graphs can get very large. For this reason, we use what is known as the adjacency matrix, which is the fundamental mathematical representation of a graph.

Let's suppose we have a graph with n nodes, each of which has a unique integer label (1, 2, ..., n) so that we can refer to it easily and without any ambiguity whatsoever. For the sake of simplicity, in this example, n = 6. Then, this graph's corresponding adjacency matrix is as follows:
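To make this concrete, here is a minimal sketch of how such an adjacency matrix can be built with NumPy. The graph in the book's figure is not reproduced here, so the edge list below is a hypothetical 6-node undirected graph chosen purely for illustration:

```python
import numpy as np

# Hypothetical 6-node undirected graph; these edges are illustrative
# only and do not correspond to the figure in the book.
n = 6
edges = [(0, 1), (0, 4), (1, 2), (2, 3), (3, 4), (4, 5)]

# Build the n x n adjacency matrix A, where A[i, j] = 1 if nodes i
# and j share an edge, and 0 otherwise. Because the graph is
# undirected, each edge sets both A[i, j] and A[j, i].
A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i, j] = 1
    A[j, i] = 1

print(A)
```

Note that the matrix has one row and one column per node, so its size is always n × n regardless of how many edges the graph has.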

Let's take a look at the matrix for a moment and see why it is the way it is. The first thing that immediately pops out is that the matrix has a size of 6 × 6 (or, more generally, n × n), because it has one row and one column for every node in the graph. Next, we notice that...
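Two further properties of this kind of matrix are easy to verify in code. Assuming the graph is undirected (as the earlier talk of "pairs of connected nodes" suggests) and has no self-loops, the matrix is symmetric with a zero diagonal, and each row sum gives that node's degree. The sketch below checks these on the same illustrative edge list used above; the edges themselves are an assumption, not the book's figure:

```python
import numpy as np

# Illustrative 6-node undirected graph (assumed edges, not the
# book's example graph).
n = 6
edges = [(0, 1), (0, 4), (1, 2), (2, 3), (3, 4), (4, 5)]
A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i, j] = A[j, i] = 1

# For an undirected graph, the adjacency matrix is symmetric.
print(np.array_equal(A, A.T))   # True

# With no self-loops, every diagonal entry is zero.
print(A.diagonal().sum())       # 0

# Each row (or column) sum is the degree of that node.
print(A.sum(axis=1))            # [2 2 2 2 3 1]
```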