Practical Automated Machine Learning Using H2O.ai

By : Salil Ajgaonkar

Overview of this book

With the huge amount of data being generated over the internet and the benefits that Machine Learning (ML) predictions bring to businesses, ML implementation has become a low-hanging fruit that everyone is striving for. The complex mathematics behind it, however, can be discouraging for many users. This is where H2O comes in: it automates various repetitive steps, and this encapsulation helps developers focus on results rather than handling complexities.

You'll begin by understanding how H2O's AutoML simplifies the implementation of ML by providing a simple, easy-to-use interface for training and using ML models. Next, you'll see how AutoML automates the entire process of training multiple models, optimizing their hyperparameters, and explaining their performance. As you advance, you'll find out how to leverage a Plain Old Java Object (POJO) and Model Object, Optimized (MOJO) to deploy your models to production.

Throughout this book, you'll take a hands-on approach to implementation using H2O that will enable you to set up your ML systems in no time. By the end of this H2O book, you'll be able to train and use your ML models with H2O AutoML, from experimentation all the way to production, without needing to understand complex statistics or data science.
Table of Contents (19 chapters)

Part 1: H2O AutoML Basics
Part 2: H2O AutoML Deep Dive
Part 3: H2O AutoML Advanced Implementation and Productization

Exploring the H2O AutoML leaderboard performance metrics

In Chapter 2, Working with H2O Flow (H2O's Web UI), once we trained the models on a dataset using H2O AutoML, the results were stored in a leaderboard. The leaderboard is a table containing the model IDs and certain metric values for the respective models (see Figure 2.33).

The leaderboard ranks the models based on a default metric, which typically appears in the second column of the table. The ranking metric depends on the kind of prediction problem the models were trained on. The following list shows the ranking metric used for each type of ML problem:

  • For binary classification problems, the ranking metric is AUC.
  • For multiclass classification problems, the ranking metric is the mean per-class error.
  • For regression problems, the ranking metric is deviance.
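To make the ranking behavior concrete, here is a minimal plain-Python sketch (not H2O's actual implementation) of how a leaderboard orders binary classification models by AUC, highest first. The model IDs and AUC values below are made up for illustration:

```python
# Toy leaderboard: each entry pairs a model ID with its AUC score.
# For AUC, higher is better, so the leaderboard sorts in descending order.
models = [
    {"model_id": "GBM_1", "auc": 0.91},
    {"model_id": "GLM_1", "auc": 0.87},
    {"model_id": "StackedEnsemble_1", "auc": 0.93},
]

# Sort by the ranking metric, best model first.
leaderboard = sorted(models, key=lambda m: m["auc"], reverse=True)

for rank, m in enumerate(leaderboard, start=1):
    print(rank, m["model_id"], m["auc"])
```

Note that for error-style metrics such as mean per-class error or deviance, the sort direction flips: lower values are better, so the leaderboard would sort ascending instead. In H2O's Python API, the trained leaderboard itself is available as the `leaderboard` attribute of an `H2OAutoML` object.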

Along with the ranking metrics, the leaderboard also provides some additional performance metrics for a better understanding of...