Backward propagation


The purpose of backpropagation is to update each of the weights in the network so that they cause the actual output to be closer to the target output, thereby minimizing the error for each output neuron and the network as a whole.

Let's focus on the output layer first. We want to find out the impact of a change in w5 on the total error.

This will be decided by ∂Etotal/∂w5, the partial derivative of Etotal with respect to w5.

Let's apply the chain rule here:

∂Etotal/∂w5 = (∂Etotal/∂OutputOL1) * (∂OutputOL1/∂InputOL1) * (∂InputOL1/∂w5)

The first term tells us how the total error changes with the output of the first output neuron. For the squared error loss, it works out to OutputOL1 - TargetOL1:

∂Etotal/∂OutputOL1 = 0.690966 - 0.9 = -0.209034

The second term is the derivative of the sigmoid activation, OutputOL1 * (1 - OutputOL1):

∂OutputOL1/∂InputOL1 = 0.690966 * (1 - 0.690966) = 0.213532

For the third term, recall how the input to the first output neuron is formed:

InputOL1 = w5*OutputHL1 + w7*OutputHL2 + B2

Differentiating this with respect to w5 leaves just the output of the first hidden neuron:

∂InputOL1/∂w5 = OutputHL1 = 0.650219
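The following Python sketch reproduces this calculation. Only the three numbers above come from the example; the squared error loss and the sigmoid activation are the assumptions stated previously:

output_ol1 = 0.690966  # actual output of the first output neuron
target_ol1 = 0.9       # target for that neuron
output_hl1 = 0.650219  # output of the first hidden neuron

# dEtotal/dOutputOL1 for the squared error loss E = 0.5*(target - output)**2
dE_dout = output_ol1 - target_ol1         # -0.209034

# dOutputOL1/dInputOL1, the sigmoid derivative out*(1 - out)
dout_din = output_ol1 * (1 - output_ol1)  # 0.213532

# dInputOL1/dw5 is simply the hidden neuron's output
din_dw5 = output_hl1                      # 0.650219

dE_dw5 = dE_dout * dout_din * din_dw5
print(dE_dw5)                             # approximately -0.029023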

Now, let's get back to the old equation and plug in all three terms:

∂Etotal/∂w5 = -0.209034 * 0.213532 * 0.650219 ≈ -0.029023

To update the weight, we will use the gradient descent formula. We have set the learning rate to be α = 0.1:

w5(new) = w5 - α * ∂Etotal/∂w5 = w5 - 0.1 * (-0.029023) = w5 + 0.0029023
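As a sketch, assuming a hypothetical starting value for w5 (the excerpt does not give one), the update step looks like this:

alpha = 0.1
dE_dw5 = -0.029023      # gradient computed above
w5 = 0.4                # hypothetical current value; not given in the text
w5_new = w5 - alpha * dE_dw5
print(w5_new)           # 0.4029023; w5 grows because the gradient is negative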

Similarly, ∂Etotal/∂w6, ∂Etotal/∂w7, and ∂Etotal/∂w8 are supposed to be calculated for the remaining output layer weights. The approach remains the same. We will leave these for you to compute, as doing so will help you understand the concepts better.

When it comes down to the hidden layer, the approach still remains the same. However, the formula will change a bit, since each hidden neuron's output feeds into every output neuron, so its contribution to the total error has to be summed over all of the output neurons...
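To make the hidden layer case concrete, here is a minimal, self-contained sketch for a weight such as w1. Every starting value below is made up purely for illustration (none of them appear in the excerpt), and the weight wiring for the hidden layer is assumed by analogy with the InputOL1 formula given previously:

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# hypothetical inputs, weights, biases, and targets for a 2-2-2 network
i1, i2 = 0.05, 0.10
w1, w2, w3, w4 = 0.15, 0.20, 0.25, 0.30   # input -> hidden layer
w5, w6, w7, w8 = 0.40, 0.45, 0.50, 0.55   # hidden -> output layer
b1, b2 = 0.35, 0.60
t1, t2 = 0.9, 0.1

# forward pass (InputOL1 = w5*OutputHL1 + w7*OutputHL2 + B2, as in the text)
out_h1 = sigmoid(w1 * i1 + w3 * i2 + b1)
out_h2 = sigmoid(w2 * i1 + w4 * i2 + b1)
out_o1 = sigmoid(w5 * out_h1 + w7 * out_h2 + b2)
out_o2 = sigmoid(w6 * out_h1 + w8 * out_h2 + b2)

# backward pass for w1: OutputHL1 feeds both output neurons, so their
# error signals are summed before applying the local sigmoid and input terms
delta_o1 = (out_o1 - t1) * out_o1 * (1 - out_o1)
delta_o2 = (out_o2 - t2) * out_o2 * (1 - out_o2)
dE_dout_h1 = delta_o1 * w5 + delta_o2 * w6
dE_dw1 = dE_dout_h1 * out_h1 * (1 - out_h1) * i1
print(dE_dw1)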