Practical Automated Machine Learning Using H2O.ai

By: Salil Ajgaonkar

Overview of this book

With the huge amount of data being generated over the internet and the benefits that Machine Learning (ML) predictions bring to businesses, ML implementation has become low-hanging fruit that everyone is striving for. The complex mathematics behind it, however, can be discouraging for many users. This is where H2O comes in: it automates many of the repetitive steps, and this encapsulation helps developers focus on results rather than on handling complexities. You’ll begin by understanding how H2O’s AutoML simplifies the implementation of ML by providing a simple, easy-to-use interface for training and using ML models. Next, you’ll see how AutoML automates the entire process of training multiple models, optimizing their hyperparameters, and explaining their performance. As you advance, you’ll find out how to leverage Plain Old Java Objects (POJOs) and Model Objects, Optimized (MOJOs) to deploy your models to production. Throughout this book, you’ll take a hands-on approach to implementation using H2O that will enable you to set up your ML systems in no time. By the end of this H2O book, you’ll be able to train and use your ML models with H2O AutoML, from experimentation all the way to production, without needing to understand complex statistics or data science.
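As a taste of how compact this workflow is, here is a minimal sketch of training models with H2O AutoML in Python; the file name train.csv and the target column response are placeholder assumptions, not examples from the book:

    import h2o
    from h2o.automl import H2OAutoML

    h2o.init()  # start (or connect to) a local H2O cluster

    # Placeholder dataset and target column; substitute your own
    train = h2o.import_file("train.csv")

    aml = H2OAutoML(max_models=10, seed=42)
    aml.train(y="response", training_frame=train)

    print(aml.leaderboard)  # models ranked by cross-validated performance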
Table of Contents (19 chapters)

Part 1: H2O AutoML Basics
Part 2: H2O AutoML Deep Dive
Part 3: H2O AutoML Advanced Implementation and Productization

Summary

In this chapter, we focused on understanding the model explainability interface provided by H2O. First, we saw how this interface provides a range of explainability features that give users detailed information about the models that were trained. Then, we learned how to apply this functionality to models trained by H2O’s AutoML in both Python and R.
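As a reminder of what that implementation looks like in Python, here is a minimal sketch; the AutoML object aml and the held-out frame test are assumptions standing in for whatever you trained earlier:

    # Generate the full explainability report for an AutoML run:
    # leaderboard, residual analysis, variable importance, and more
    aml.explain(test)

    # The same interface works for a single model as well:
    # h2o.explain(aml.leader, test)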

Once we were comfortable with its implementation, we started exploring the various explainability graphs displayed in the interface’s output, starting with residual analysis. We observed how residual analysis helps highlight heteroscedasticity in a model’s errors and how such patterns can indicate that information is missing from your dataset.
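For reference, this plot can also be generated on its own. A minimal sketch in Python, assuming the same trained leader model and test frame as before (h2o.residual_analysis_plot is part of the same explainability module in recent H2O-3 releases):

    import h2o

    # Plot fitted values against residuals for the leader model;
    # a funnel or other visible pattern in the residuals suggests
    # heteroscedasticity, i.e., structure the model has not captured
    h2o.residual_analysis_plot(aml.leader, test)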

Then, we explored variable importance and how it helps you identify the most important features in the dataset. Building on top of this, we learned how feature importance heatmaps can help you compare feature importance across all the...
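As a closing sketch under the same assumptions (noting that stacked ensembles do not report native variable importance, so we pull a model type that does), the plots discussed above map to calls such as:

    import h2o

    # Variable importance for a single model that supports it, e.g., a GBM
    # (get_best_model is available in recent H2O-3 releases)
    model = aml.get_best_model(algorithm="gbm")
    model.varimp_plot()

    # Heatmap comparing variable importance across all models in the AutoML run
    h2o.varimp_heatmap(aml)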