Go Machine Learning Projects

By: Xuanyi Chew

Overview of this book

Go is the perfect language for machine learning: it helps you describe complex algorithms clearly, and it helps developers understand and run efficient, optimized code. This book will teach you how to implement machine learning in Go to build programs that are easy to deploy, and code that is not only easy to understand and debug, but whose performance can also be measured. The book begins by guiding you through setting up your machine learning environment with Go libraries and capabilities. You will then plunge into regression analysis of a real-life house pricing dataset and build a classification model in Go to classify emails as spam or ham. Using Gonum, Gorgonia, and STL, you will explore time series analysis along with decomposition, and clean up your personal Twitter timeline by clustering tweets. In addition to this, you will learn how to recognize handwriting using neural networks and convolutional neural networks. Lastly, you'll learn how to choose the most appropriate machine learning algorithms for your projects with the help of a facial detection project. By the end of this book, you will have developed a solid machine learning mindset, a strong hold on the powerful Go toolkit, and a sound understanding of the practical implementation of machine learning algorithms in real-world projects.

Summary

In this chapter, I have shown the basics of what a Naive Bayes classifier looks like. A classifier written with a fundamental understanding of statistics will trump any publicly available library any day.

The classifier itself is fewer than 100 lines of code, but with it comes a great deal of power. Being able to perform classification with 98% or greater accuracy is no mean feat.
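To make the shape of such a classifier concrete, here is a minimal, self-contained sketch of a word-count Naive Bayes spam filter in Go. It is not the implementation from this chapter; the names and the toy training data are illustrative only. It simply shows how little code the core idea needs: count words per class, apply add-one smoothing, and compare log posteriors.

package main

import (
	"fmt"
	"math"
	"strings"
)

// class is the label assigned to a document.
type class int

const (
	ham class = iota
	spam
)

// classifier is a word-count Naive Bayes model with add-one (Laplace) smoothing.
type classifier struct {
	wordCount  [2]map[string]int // per-class word frequencies
	totalWords [2]int            // total words seen per class
	docCount   [2]int            // documents seen per class
}

func newClassifier() *classifier {
	return &classifier{
		wordCount: [2]map[string]int{make(map[string]int), make(map[string]int)},
	}
}

// Train updates the counts with a single labelled document.
func (c *classifier) Train(doc string, label class) {
	for _, w := range strings.Fields(strings.ToLower(doc)) {
		c.wordCount[label][w]++
		c.totalWords[label]++
	}
	c.docCount[label]++
}

// Predict returns the most likely class by comparing log posteriors, which
// avoids underflow from multiplying many small probabilities together.
func (c *classifier) Predict(doc string) class {
	best, bestScore := ham, math.Inf(-1)
	total := c.docCount[ham] + c.docCount[spam]
	for _, cl := range []class{ham, spam} {
		// Log prior: the fraction of training documents with this label.
		score := math.Log(float64(c.docCount[cl]) / float64(total))
		vocab := len(c.wordCount[cl])
		for _, w := range strings.Fields(strings.ToLower(doc)) {
			// Add-one smoothing so unseen words don't zero out the likelihood.
			p := float64(c.wordCount[cl][w]+1) / float64(c.totalWords[cl]+vocab)
			score += math.Log(p)
		}
		if score > bestScore {
			best, bestScore = cl, score
		}
	}
	return best
}

func main() {
	c := newClassifier()
	c.Train("cheap viagra buy now", spam)
	c.Train("limited offer buy cheap pills", spam)
	c.Train("meeting agenda for tomorrow", ham)
	c.Train("lunch tomorrow with the team", ham)

	fmt.Println(c.Predict("buy cheap pills now"))       // 1 (spam)
	fmt.Println(c.Predict("agenda for the team lunch")) // 0 (ham)
}

Working in log-probabilities rather than multiplying raw probabilities is what keeps the scores from underflowing on longer documents.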

A note on the 98% figure: this is not state of the art. State of the art is in the high 99.xx% range. The main reason there is a race for that final fraction of a percent is scale. Imagine you're Google and you're running Gmail: an error rate of 0.01% means millions of misclassified emails, and that means many unhappy customers.
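To put a rough number on that (the daily volume here is a hypothetical figure for illustration, not one from the book): if a provider handles on the order of 100 billion emails a day, a 0.01% error rate works out to 0.0001 × 100,000,000,000 = 10,000,000, that is, about ten million misclassified emails every single day.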

For the most part in machine learning, whether to go for newer, less-tested methods really depends on the scale of your problem. In my experience from...