Go Machine Learning Projects

By: Xuanyi Chew

Overview of this book

Go is the perfect language for machine learning; it helps to describe complex algorithms clearly, and also helps developers understand how to run efficient, optimized code. This book will teach you how to implement machine learning in Go, producing programs that are easy to deploy and code that is not only easy to understand and debug, but whose performance can also be measured. The book begins by guiding you through setting up your machine learning environment with Go libraries and capabilities. You will then plunge into regression analysis of a real-life house pricing dataset and build a classification model in Go to classify emails as spam or ham. Using Gonum, Gorgonia, and STL, you will explore time series analysis along with decomposition, and clean up your personal Twitter timeline by clustering tweets. In addition, you will learn how to recognize handwriting using neural networks and convolutional neural networks. Lastly, you'll learn how to choose the most appropriate machine learning algorithms for your projects with the help of a facial detection project. By the end of this book, you will have developed a solid machine learning mindset, a firm grasp of the powerful Go toolkit, and a sound understanding of the practical implementation of machine learning algorithms in real-world projects.

Everything you know about neurons is wrong

In the previous chapter, I mentioned that everything you know about neural networks is wrong. Here, I repeat that claim. Most of the literature on neural networks starts with a comparison to biological neurons and ends there, which often leads readers to assume that the two are alike. I'd like to make the point that artificial neural networks are nothing like their biological namesake.

Instead, I spent a significant portion of the last chapter describing linear algebra, and explained that the twist is that you can express almost any machine learning (ML) problem as linear algebra. I shall continue to do so in this chapter.
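To make that claim concrete, here is a minimal sketch (not taken from the book) of a single neural-network layer written purely as linear algebra with Gonum: y = sigma(Wx + b). The weight, bias, and input values below are made up solely for illustration.

package main

import (
	"fmt"
	"math"

	"gonum.org/v1/gonum/mat"
)

// sigmoid is one possible non-linearity, applied elementwise.
func sigmoid(x float64) float64 { return 1 / (1 + math.Exp(-x)) }

func main() {
	// W is a 2x3 weight matrix, x a 3-dimensional input, b a 2-dimensional bias.
	// All values here are illustrative only.
	W := mat.NewDense(2, 3, []float64{
		0.1, 0.2, 0.3,
		0.4, 0.5, 0.6,
	})
	x := mat.NewVecDense(3, []float64{1, 2, 3})
	b := mat.NewVecDense(2, []float64{0.01, 0.02})

	// z = Wx + b: the "layer" is nothing more than a matrix-vector product plus a bias.
	var wx, z mat.VecDense
	wx.MulVec(W, x)
	z.AddVec(&wx, b)

	// y = sigma(z): the non-linearity applied to each element.
	y := mat.NewVecDense(z.Len(), nil)
	for i := 0; i < z.Len(); i++ {
		y.SetVec(i, sigmoid(z.AtVec(i)))
	}

	fmt.Printf("y = %v\n", mat.Formatted(y))
}

Stacking several such steps, each a matrix multiplication followed by a non-linearity, is all a feedforward network is.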

Rather than thinking of artificial neural networks as analogues of real-life neural networks, I personally encourage you to think of them as mathematical equations. The non-linearities...