Machine Learning With Go
Overview of this book

The mission of this book is to turn readers into productive, innovative data analysts who leverage Go to build robust and valuable applications. To this end, the book clearly introduces the technical aspects of building predictive models in Go, but it also helps the reader understand how machine learning workflows are applied in real-world scenarios. Machine Learning with Go shows readers how to be productive in machine learning while also producing applications that maintain a high level of integrity. It also gives readers patterns to overcome challenges that are often encountered when trying to integrate machine learning in an engineering organization. Readers will begin by gaining a solid understanding of how to gather, organize, and parse real-world data from a variety of sources. They will then develop a solid statistical toolkit that allows them to quickly gain intuition about the content of a dataset. Finally, readers will gain hands-on experience implementing essential machine learning techniques (regression, classification, clustering, and so on) with the relevant Go packages. By the end of the book, the reader will have a solid machine learning mindset and a powerful Go toolkit of techniques, packages, and example implementations.
Table of Contents (11 chapters)

Backpropagation

Chapter 8, Neural Networks and Deep Learning, included an example of a neural network built from scratch in Go. That neural network included an implementation of backpropagation, the training method found in almost any neural network code. We touched on some of the details in that chapter, but the method is used so often that it is worth walking through step by step here.

To train a neural network with backpropagation, we do the following for each of a series of epochs:

  1. Feed the training data through the neural network to produce output.
  2. Calculate an error between the expected output and the predicted output.
  3. Based on the error, calculate updates for the neural network weights and biases.
  4. Propagate these updates back into the network.
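The four steps above can be sketched in Go for a tiny network with a single hidden layer. This is a minimal, self-contained illustration, not the book's exact implementation: the network shape (2 inputs, 2 hidden units, 1 output), the sigmoid activations, the squared-error loss, and the learning rate are all assumptions made for the sake of the example.

```go
package main

import (
	"fmt"
	"math"
)

// sigmoid and its derivative, with the derivative expressed
// in terms of the already-activated output a = sigmoid(z).
func sigmoid(x float64) float64      { return 1.0 / (1.0 + math.Exp(-x)) }
func sigmoidPrime(a float64) float64 { return a * (1.0 - a) }

// net is a tiny network: 2 inputs -> 2 hidden units -> 1 output.
type net struct {
	wHidden [2][2]float64 // wHidden[j][i]: weight from input i to hidden unit j
	bHidden [2]float64
	wOut    [2]float64 // weight from hidden unit j to the output
	bOut    float64
}

// forward is step 1: feed the input through the network to produce output.
func (n *net) forward(x [2]float64) (hidden [2]float64, out float64) {
	for j := 0; j < 2; j++ {
		z := n.bHidden[j]
		for i := 0; i < 2; i++ {
			z += n.wHidden[j][i] * x[i]
		}
		hidden[j] = sigmoid(z)
	}
	z := n.bOut
	for j := 0; j < 2; j++ {
		z += n.wOut[j] * hidden[j]
	}
	return hidden, sigmoid(z)
}

// trainExample performs steps 2-4 for one (x, y) pair: calculate the
// error, derive the weight/bias updates, and propagate them back.
func (n *net) trainExample(x [2]float64, y, lr float64) {
	hidden, out := n.forward(x)

	// Step 2: error signal at the output (squared-error derivative
	// times the sigmoid derivative).
	dOut := (out - y) * sigmoidPrime(out)

	// Step 3: propagate the error back through the output weights
	// to get each hidden unit's error signal.
	var dHidden [2]float64
	for j := 0; j < 2; j++ {
		dHidden[j] = dOut * n.wOut[j] * sigmoidPrime(hidden[j])
	}

	// Step 4: apply the gradient-descent updates.
	for j := 0; j < 2; j++ {
		n.wOut[j] -= lr * dOut * hidden[j]
		for i := 0; i < 2; i++ {
			n.wHidden[j][i] -= lr * dHidden[j] * x[i]
		}
		n.bHidden[j] -= lr * dHidden[j]
	}
	n.bOut -= lr * dOut
}

func main() {
	// Toy dataset: logical OR, easily learnable by this network.
	xs := [][2]float64{{0, 0}, {0, 1}, {1, 0}, {1, 1}}
	ys := []float64{0, 1, 1, 1}

	// Small, fixed, asymmetric starting weights (illustrative values).
	n := &net{
		wHidden: [2][2]float64{{0.5, -0.3}, {0.2, 0.4}},
		wOut:    [2]float64{0.3, -0.2},
	}

	mse := func() float64 {
		var s float64
		for k, x := range xs {
			_, out := n.forward(x)
			s += (out - ys[k]) * (out - ys[k])
		}
		return s / float64(len(xs))
	}

	before := mse()
	for epoch := 0; epoch < 5000; epoch++ {
		for k, x := range xs {
			n.trainExample(x, ys[k], 0.5)
		}
	}
	fmt.Printf("MSE before: %.4f, after: %.4f\n", before, mse())
}
```

Each pass over the dataset is one epoch; repeating the four steps across epochs steadily drives the error down on this toy problem.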

As a reminder, our implementation of this procedure for a network with a single hidden layer looked...