Practical Automated Machine Learning Using H2O.ai

By: Salil Ajgaonkar

Overview of this book

With the huge amount of data being generated over the internet and the benefits that Machine Learning (ML) predictions bring to businesses, ML implementation has become low-hanging fruit that everyone is striving for. The complex mathematics behind it, however, can be discouraging for a lot of users. This is where H2O comes in – it automates various repetitive steps, and this encapsulation helps developers focus on results rather than handling complexities.

You'll begin by understanding how H2O's AutoML simplifies the implementation of ML by providing a simple, easy-to-use interface for training and using ML models. Next, you'll see how AutoML automates the entire process of training multiple models, optimizing their hyperparameters, and explaining their performance. As you advance, you'll find out how to leverage Plain Old Java Objects (POJOs) and Model Objects, Optimized (MOJOs) to deploy your models to production. Throughout this book, you'll take a hands-on approach to implementation using H2O that'll enable you to set up your ML systems in no time. By the end of this H2O book, you'll be able to train and use your ML models with H2O AutoML, right from experimentation all the way to production, without needing to understand complex statistics or data science.
Table of Contents (19 chapters)

Part 1: H2O AutoML Basics
Part 2: H2O AutoML Deep Dive
Part 3: H2O AutoML Advanced Implementation and Productization

Understanding the Gradient Boosting Machine algorithm

Gradient Boosting Machine (GBM) is a forward learning ensemble ML algorithm that works on both classification and regression problems. Like the DRF algorithm, a GBM model is an ensemble model: as a whole, it is a combination of multiple weak learner models whose results are aggregated and presented as the GBM prediction. GBM is similar to DRF in that it consists of multiple decision trees; in GBM, however, those trees are built in a sequence, with each tree trained to reduce the error left by the trees before it.

GBM can be used to predict continuous numerical values as well as to classify data. When GBM is used to predict continuous numerical values, we say that we are using GBM for regression; when it is used to classify data, we say that we are using GBM for classification. An example follows this paragraph.
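The following is a minimal sketch of training H2O's GBM for both tasks using the H2O Python API. The dataset file and the target column name are hypothetical placeholders; the point is that H2O treats the problem as regression when the target is numeric and as classification when the target is converted to a factor.

```python
# Minimal sketch: H2O GBM for regression vs. classification.
# "my_dataset.csv" and the "target" column are hypothetical placeholders.
import h2o
from h2o.estimators import H2OGradientBoostingEstimator

h2o.init()

data = h2o.import_file("my_dataset.csv")
train, valid = data.split_frame(ratios=[0.8], seed=42)
predictors = [c for c in data.columns if c != "target"]

# Regression: leave the numeric target column as-is.
gbm_reg = H2OGradientBoostingEstimator(ntrees=50, max_depth=5, learn_rate=0.1, seed=42)
gbm_reg.train(x=predictors, y="target", training_frame=train, validation_frame=valid)

# Classification: convert the target column to a factor (categorical) first.
train["target"] = train["target"].asfactor()
valid["target"] = valid["target"].asfactor()
gbm_clf = H2OGradientBoostingEstimator(ntrees=50, max_depth=5, learn_rate=0.1, seed=42)
gbm_clf.train(x=predictors, y="target", training_frame=train, validation_frame=valid)

print(gbm_reg.model_performance(valid))
print(gbm_clf.model_performance(valid))
```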

The GBM algorithm is founded on decision trees, just like DRF. However, the way the decision trees are built differs from DRF: DRF grows its trees independently of one another and averages their outputs, whereas GBM grows each new tree to correct the errors made by the trees built before it, as illustrated in the sketch below.
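To make the sequential idea concrete, here is a simplified, from-scratch sketch of boosting for regression that uses scikit-learn decision trees as the weak learners. This is not H2O's implementation; it only illustrates how each tree is fitted to the residual errors of the ensemble built so far.

```python
# Simplified gradient boosting sketch (squared-error loss), for illustration only.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_gbm(X, y, n_trees=50, learning_rate=0.1, max_depth=3):
    # Start from a constant prediction: the mean of the target.
    base_prediction = np.mean(y)
    predictions = np.full(len(y), base_prediction)
    trees = []
    for _ in range(n_trees):
        residuals = y - predictions              # errors of the current ensemble
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, residuals)                   # each new tree learns the remaining error
        predictions += learning_rate * tree.predict(X)
        trees.append(tree)
    return base_prediction, trees

def predict_gbm(X, base_prediction, trees, learning_rate=0.1):
    predictions = np.full(X.shape[0], base_prediction)
    for tree in trees:
        predictions += learning_rate * tree.predict(X)
    return predictions
```

The learning rate shrinks each tree's contribution, so many small corrections are combined instead of a few large ones, which is what makes the sequence of weak learners add up to a strong model.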

Let’s try to understand...