Machine Learning at Scale with H2O

By: Gregory Keys, David Whiting
Overview of this book

H2O is an open source, fast, and scalable machine learning framework that allows you to build models using big data and then easily productionalize them in diverse enterprise environments. Machine Learning at Scale with H2O begins with an overview of the challenges faced in building machine learning models on large enterprise systems, and then addresses how H2O helps you to overcome them. You’ll start by exploring H2O’s in-memory distributed architecture and find out how it enables you to build highly accurate and explainable models on massive datasets using your favorite ML algorithms, language, and IDE. You’ll also get to grips with the seamless integration of H2O model building and deployment with Spark using H2O Sparkling Water. You’ll then learn how to easily deploy models with H2O MOJO. Next, the book shows you how H2O Enterprise Steam handles admin configurations and user management, and then helps you to identify different stakeholder perspectives that a data scientist must understand in order to succeed in an enterprise setting. Finally, you’ll be introduced to the H2O AI Cloud platform and explore the entire machine learning life cycle using multiple advanced AI capabilities. By the end of this book, you’ll be able to build and deploy advanced, state-of-the-art machine learning models for your business needs.
Table of Contents (22 chapters)

Section 1 – Introduction to the H2O Machine Learning Platform for Data at Scale
Section 2 – Building State-of-the-Art Models on Large Data Volumes Using H2O
Section 3 – Deploying Your Models to Production Environments
Section 4 – Enterprise Stakeholder Perspectives
Section 5 – Broadening the View – Data to AI Applications with the H2O AI Cloud Platform

Surveying a sample of MOJO deployment patterns

The purpose of this chapter is to survey the diverse ways in which MOJOs can be deployed to make predictions. Enough detail is given to convey the context of MOJO deployment and scoring, and links are provided for the low-level details.
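Common to most of these patterns is the h2o-genmodel Java runtime, which loads a MOJO and scores records without any connection to an H2O cluster. The following minimal sketch assumes a binomial classification MOJO exported to a local file; the file path, feature names, and values are illustrative only, not part of any specific deployment:

import hex.genmodel.MojoModel;
import hex.genmodel.easy.EasyPredictModelWrapper;
import hex.genmodel.easy.RowData;
import hex.genmodel.easy.prediction.BinomialModelPrediction;

public class MojoScoringSketch {
    public static void main(String[] args) throws Exception {
        // Load the exported MOJO artifact (file path is hypothetical)
        EasyPredictModelWrapper model = new EasyPredictModelWrapper(
                MojoModel.load("/models/loan_default.zip"));

        // Build a single row of input features (names and values are hypothetical)
        RowData row = new RowData();
        row.put("loan_amount", "25000");
        row.put("employment_years", "7");

        // Score the row; a binomial classification model is assumed here
        BinomialModelPrediction p = model.predictBinomial(row);
        System.out.println("Predicted label: " + p.label);
        System.out.println("Class probabilities: "
                + java.util.Arrays.toString(p.classProbabilities));
    }
}

The same wrapper can be embedded in a REST service, a batch job, or a stream processor, which is what gives rise to the different deployment patterns summarized next.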

First, let's summarize our sample of MOJO scoring patterns in table form to get a sense of the many different ways you can deploy MOJOs. After this overview, we will elaborate on each table entry.

Note that the table columns for our deployment-pattern summaries are represented as follows:

  • Data Velocity: This refers to the size and speed of the data being scored and is categorized as real-time (a single record scored, typically in under 100 milliseconds), batch (a large number of records scored at one time), or streaming (a continuous flow of records scored as they arrive).
  • Scoring Communication: This refers to how the scoring is triggered...