Machine Learning at Scale with H2O

By: Gregory Keys, David Whiting

Overview of this book

H2O is an open-source, fast, and scalable machine learning framework that allows you to build models using big data and then easily productionize them in diverse enterprise environments. Machine Learning at Scale with H2O begins with an overview of the challenges faced in building machine learning models on large enterprise systems, and then addresses how H2O helps you overcome them. You’ll start by exploring H2O’s in-memory distributed architecture and find out how it enables you to build highly accurate and explainable models on massive datasets using your favorite ML algorithms, language, and IDE. You’ll also get to grips with the seamless integration of H2O model building and deployment with Spark using H2O Sparkling Water. You’ll then learn how to easily deploy models with H2O MOJO. Next, the book shows you how H2O Enterprise Steam handles admin configurations and user management, and then helps you to identify different stakeholder perspectives that a data scientist must understand in order to succeed in an enterprise setting. Finally, you’ll be introduced to the H2O AI Cloud platform and explore the entire machine learning life cycle using multiple advanced AI capabilities. By the end of this book, you’ll be able to build and deploy advanced, state-of-the-art machine learning models for your business needs.
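As a taste of the Sparkling Water integration mentioned above, here is a minimal PySparkling sketch. The paths and DataFrame names are hypothetical, and the exact H2OContext creation call and conversion method names (asH2OFrame versus the older as_h2o_frame) vary slightly across Sparkling Water versions.

    from pyspark.sql import SparkSession
    from pysparkling import H2OContext

    # Start (or attach to) a Spark session, then launch H2O alongside the Spark executors
    spark = SparkSession.builder.appName("h2o-sparkling-demo").getOrCreate()
    hc = H2OContext.getOrCreate()

    # Hypothetical input path; Spark handles the initial ingestion
    events = spark.read.parquet("s3://my-bucket/events.parquet")

    # Hand the Spark DataFrame to H2O as a distributed H2OFrame for model building
    events_hf = hc.asH2OFrame(events)

    # ...train H2O models on events_hf, then convert results back to Spark if needed
    scored = hc.asSparkFrame(events_hf)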
Table of Contents (22 chapters)

Section 1 – Introduction to the H2O Machine Learning Platform for Data at Scale
Section 2 – Building State-of-the-Art Models on Large Data Volumes Using H2O
Section 3 – Deploying Your Models to Production Environments
Section 4 – Enterprise Stakeholder Perspectives
Section 5 – Broadening the View – Data to AI Applications with the H2O AI Cloud Platform

Summary

In this chapter, we conducted a wide survey of H2O's capabilities for model building at scale. We learned about the data sources from which we can ingest data into our H2O clusters and the file formats that are supported. We learned how this data moves from the source to the H2O cluster, and how the H2OFrame API provides a handle in the IDE that represents the distributed in-memory data on the cluster as a single two-dimensional data structure. We then learned the many ways in which we can manipulate data through the H2OFrame API, and how to export it to external systems if need be.
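As a quick illustration of that ingest-manipulate-export flow, the following is a minimal sketch using the h2o Python package; the file paths and column names (loan_amount, annual_income, state) are hypothetical.

    import h2o

    h2o.init()  # connect to (or locally start) an H2O cluster

    # Ingest: the file is parsed in parallel across the cluster; the returned
    # H2OFrame is a lightweight client-side handle to the distributed data
    loans = h2o.import_file("hdfs://data/loans.csv")

    # Manipulate through the H2OFrame API (operations execute on the cluster)
    loans["loan_to_income"] = loans["loan_amount"] / loans["annual_income"]
    high_risk = loans[loans["loan_to_income"] > 0.4, :]
    by_state = high_risk.group_by("state").mean("loan_amount").get_frame()

    # Export the result to an external system if need be
    h2o.export_file(by_state, "hdfs://data/high_risk_by_state.csv", force=True)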

We then surveyed the core of H2O model building at scale – H2O's many state-of-the-art distributed unsupervised and supervised learning algorithms. We put these algorithms into context by surveying the model capabilities that surround them, from training, evaluating, and explaining models to using model artifacts to retrain, score, and inspect them.
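Continuing the sketch above, here is a hedged example of that train-evaluate-explain-export loop with one of H2O's supervised algorithms (gradient boosting); the default column and the parameter values are hypothetical.

    from h2o.estimators.gbm import H2OGradientBoostingEstimator

    loans["default"] = loans["default"].asfactor()   # binary classification target
    train, valid, test = loans.split_frame(ratios=[0.7, 0.15], seed=42)

    predictors = [c for c in loans.columns if c != "default"]
    gbm = H2OGradientBoostingEstimator(ntrees=200, seed=42)
    gbm.train(x=predictors, y="default", training_frame=train, validation_frame=valid)

    # Evaluate on held-out data and inspect one form of explainability
    perf = gbm.model_performance(test)
    print(perf.auc())
    gbm.varimp_plot()

    # Export the trained model as a MOJO artifact for scoring outside the cluster
    mojo_path = gbm.download_mojo(path="/tmp", get_genmodel_jar=True)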

With this map of the landscape firmly in hand, we...