Machine Learning with Go Quick Start Guide

By: Michael Bironneau, Toby Coleman

Overview of this book

Machine learning is an essential part of today's data-driven world and is extensively used across industries, including financial forecasting, robotics, and web technology. This book will teach you how to efficiently develop machine learning applications in Go. The book starts with an introduction to machine learning and its development process, explaining the types of problems that it aims to solve and the solutions it offers. It then covers setting up a frictionless Go development environment, including running Go interactively with Jupyter notebooks. Finally, common data processing techniques are introduced. The book then teaches the reader about supervised and unsupervised learning techniques through worked examples that include the implementation of evaluation metrics. These worked examples make use of the prominent open-source libraries GoML and Gonum. The book also teaches readers how to load a pre-trained model and use it to make predictions. It then moves on to the operational side of running machine learning applications: deployment, Continuous Integration, and helpful advice for effective logging and monitoring. At the end of the book, readers will learn how to set up a machine learning project for success, formulating realistic success criteria and accurately translating business requirements into technical ones.

Types of ML algorithms

There are two main categories of ML algorithms: supervised learning and unsupervised learning. The decision of which type of algorithm to use depends on the data you have available and the project objectives.

Supervised learning problems

Supervised learning problems aim to infer the best mapping from inputs to outputs based on a provided set of labeled input/output pairs. The labeled dataset acts as feedback for the algorithm, allowing it to gauge the quality of its solution. For example, given a list of mean yearly crude oil prices from 2010 to 2018, you may wish to predict the mean yearly crude oil price for 2019. The error the algorithm makes on the 2010-2018 data allows the engineer to estimate its error on the target prediction year of 2019.
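
To make the crude oil example concrete, the following is a minimal sketch of fitting a straight-line trend to yearly prices with Gonum's stat package and extrapolating it to 2019. The price values are illustrative placeholders rather than real crude oil data.

package main

import (
	"fmt"

	"gonum.org/v1/gonum/stat"
)

func main() {
	// Years are the independent variable; the prices are illustrative
	// placeholders, not real crude oil data.
	years := []float64{2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018}
	prices := []float64{79.5, 94.9, 94.1, 98.0, 93.2, 48.7, 43.3, 50.8, 65.2}

	// Fit price = alpha + beta*year by ordinary least squares.
	alpha, beta := stat.LinearRegression(years, prices, nil, false)

	// Extrapolate the fitted trend to the unseen year 2019.
	fmt.Printf("predicted mean price for 2019: %.2f\n", alpha+beta*2019)
}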

A labeled pair consists of an input vector of independent variables and an output vector of dependent variables. For example, a labeled dataset for facial recognition might contain input vectors holding facial image data alongside output vectors encoding the photographed person's name. A labeled set (or labeled dataset) is a collection of labeled pairs.
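
In Go, a labeled pair and a labeled set might be represented with types such as the following; the type and field names here are our own illustrative choices rather than part of any particular library.

// LabeledPair couples an input vector of independent variables with the
// output vector of dependent variables observed alongside it.
type LabeledPair struct {
	Input  []float64 // for example, flattened pixel intensities of a facial image
	Output []float64 // for example, an encoding of the photographed person's name
}

// LabeledSet is a collection of labeled pairs used to train and evaluate
// a supervised learning algorithm.
type LabeledSet []LabeledPair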

Given a labeled collection of handwritten digits, you may wish to predict the label of a previously unseen handwritten digit. Similarly, given a dataset of emails labeled as either spam or not spam, a company that wants to create a spam filter would want to predict whether a previously unseen message is spam. All of these are supervised learning problems.

Supervised ML problems can be further divided into prediction and classification:

  • Classification attempts to label an unknown input sample with a known output value. For example, you could train an algorithm to recognize breeds of cats. The algorithm would classify an unknown cat by labeling it with a known breed.
  • By contrast, prediction algorithms attempt to label an unknown input sample with either a known or unknown output value. This is also known as estimation or regression. A canonical prediction problem is time series forecasting, where the output value of the series is predicted for a time value that was not previously seen.
A classification algorithm will try to associate an input sample with an item from a given list of output categories: for example, deciding whether a photo shows a cat, a dog, or neither is a classification problem. A prediction algorithm will map an input sample to a member of an output domain, which could be continuous: for example, attempting to guess a person's height from their weight and gender is a prediction problem.
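
The distinction also shows up in the signatures a Go implementation might expose: a classifier maps an input sample to one of a fixed set of labels, whereas a predictor maps it to a value from a possibly continuous domain. The interfaces below are illustrative rather than taken from a specific library.

// Classifier assigns an input sample to one of a fixed, known set of
// categories, such as "cat", "dog", or "neither".
type Classifier interface {
	Classify(input []float64) (label string, err error)
}

// Predictor (also called a regressor or estimator) maps an input sample to
// a value from a possibly continuous output domain, such as a person's height.
type Predictor interface {
	Predict(input []float64) (value float64, err error)
}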

We will cover supervised algorithms in more detail in Chapter 3, Supervised Learning.

Unsupervised learning problems

Unsupervised learning problems aim to learn from data that has not been labeled. For example, given market research data, a clustering algorithm can divide consumers into segments, saving time for marketing professionals. Given a dataset of medical scans, unsupervised algorithms can segment each image into different kinds of tissue for further analysis. One unsupervised learning approach, known as dimensionality reduction, works in conjunction with other algorithms as a pre-processing step, reducing the volume of data that another algorithm has to be trained on and thereby cutting down training times. We will cover unsupervised learning algorithms in more detail in Chapter 4, Unsupervised Learning.
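
As an illustration of the clustering idea, the following deliberately simplified one-dimensional k-means sketch, written in plain Go, splits consumers into two segments by annual spend. The data and the choice of two segments are purely illustrative; a real project would more likely use a library such as Gonum or GoML.

package main

import (
	"fmt"
	"math"
)

// kmeans1D assigns each value to the nearest of the given centroids and
// iteratively recomputes each centroid as the mean of its members.
func kmeans1D(values, centroids []float64, iterations int) []int {
	assignments := make([]int, len(values))
	for iter := 0; iter < iterations; iter++ {
		// Assignment step: attach each value to its nearest centroid.
		for i, v := range values {
			best := 0
			for c := range centroids {
				if math.Abs(v-centroids[c]) < math.Abs(v-centroids[best]) {
					best = c
				}
			}
			assignments[i] = best
		}
		// Update step: move each centroid to the mean of its members.
		for c := range centroids {
			sum, count := 0.0, 0
			for i, v := range values {
				if assignments[i] == c {
					sum += v
					count++
				}
			}
			if count > 0 {
				centroids[c] = sum / float64(count)
			}
		}
	}
	return assignments
}

func main() {
	// Illustrative annual spend figures for a handful of consumers.
	spend := []float64{120, 150, 135, 900, 950, 880, 140, 910}
	// Two rough initial centroid guesses: a "low" and a "high" segment.
	segments := kmeans1D(spend, []float64{100, 1000}, 10)
	fmt.Println("segment assignments:", segments)
}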

Most ML algorithms can be efficiently implemented in a wide range of programming languages. While Python has been a favorite of data scientists for its ease of use and plethora of open source libraries, Go presents significant advantages for a developer creating a commercial ML application.