Machine Learning at Scale with H2O

By: Gregory Keys, David Whiting

Overview of this book

H2O is an open source, fast, and scalable machine learning framework that allows you to build models using big data and then easily productionalize them in diverse enterprise environments. Machine Learning at Scale with H2O begins with an overview of the challenges faced in building machine learning models on large enterprise systems, and then addresses how H2O helps you to overcome them. You’ll start by exploring H2O’s in-memory distributed architecture and find out how it enables you to build highly accurate and explainable models on massive datasets using your favorite ML algorithms, language, and IDE. You’ll also get to grips with the seamless integration of H2O model building and deployment with Spark using H2O Sparkling Water. You’ll then learn how to easily deploy models with H2O MOJO. Next, the book shows you how H2O Enterprise Steam handles admin configurations and user management, and then helps you to identify different stakeholder perspectives that a data scientist must understand in order to succeed in an enterprise setting. Finally, you’ll be introduced to the H2O AI Cloud platform and explore the entire machine learning life cycle using multiple advanced AI capabilities. By the end of this book, you’ll be able to build and deploy advanced, state-of-the-art machine learning models for your business needs.
Table of Contents (22 chapters)

Section 1 – Introduction to the H2O Machine Learning Platform for Data at Scale
Section 2 – Building State-of-the-Art Models on Large Data Volumes Using H2O
Section 3 – Deploying Your Models to Production Environments
Section 4 – Enterprise Stakeholder Perspectives
Section 5 – Broadening the View – Data to AI Applications with the H2O AI Cloud Platform

Explaining models built in H2O

Model performance metrics measured on our test data can tell us how well a model predicts and how fast it predicts. As mentioned in the chapter introduction, knowing that a model predicts well is not a sufficient reason to put it into production. Performance metrics alone provide no insight into why the model predicts as it does. If we don't understand why a model predicts well, we have little hope of anticipating the conditions under which it will predict poorly. Explaining a model's reasoning is therefore a critical step before promoting it to production. This process can be described as gaining trust in the model.
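
As a minimal sketch of how such metrics are obtained in h2o-py (the file path and the "response" target column here are illustrative placeholders, not taken from the book), we can train a model and score it against a held-out test frame:

    # Minimal sketch: train a model and measure predictive performance
    # on held-out data with h2o-py. The dataset path and "response"
    # column are hypothetical placeholders.
    import h2o
    from h2o.estimators import H2OGradientBoostingEstimator

    h2o.init()

    data = h2o.import_file("path/to/data.csv")      # hypothetical path
    data["response"] = data["response"].asfactor()  # binary target
    train, test = data.split_frame(ratios=[0.8], seed=42)

    model = H2OGradientBoostingEstimator(seed=42)
    model.train(y="response", training_frame=train)

    # Metrics on the test frame tell us how well the model predicts,
    # but nothing about why it predicts as it does
    perf = model.model_performance(test)
    print(perf.auc())
    print(perf.logloss())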

Explainability is typically divided into global and local components. Global explainability describes how the model works for an entire population. Gaining trust in a model is primarily a function of determining how it works globally. Local explanations operate instead on individual rows...
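
As a minimal sketch of how these two modes look in h2o-py (assuming model is a trained H2O model and test is a held-out H2OFrame, for example as in the previous sketch), H2O's explainability interface exposes both a global report and a per-row variant:

    # Global explainability: a single report over the whole test frame,
    # including variable importance, SHAP summary, and partial
    # dependence plots. `model` and `test` come from the earlier sketch.
    model.explain(test)

    # Local explainability: the same style of report for one
    # prediction, here the first row of the test frame
    model.explain_row(test, row_index=0)

The global report characterizes behavior across the population, which is what trust-building mostly draws on, while the per-row report surfaces the contributions behind a single prediction.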