
Machine Learning on Kubernetes

By: Faisal Masood, Ross Brigoli

Overview of this book

MLOps is an emerging field that aims to bring repeatability, automation, and standardization from the software engineering domain to data science and machine learning engineering. By implementing MLOps with Kubernetes, data scientists, IT professionals, and data engineers can collaborate and build machine learning solutions that deliver business value for their organization.

You'll begin by understanding the different components of a machine learning project. Then, you'll design and build a practical end-to-end machine learning project using open source software. As you progress, you'll understand the basics of MLOps and the value it can bring to machine learning projects. You will also gain experience in building, configuring, and using an open source, containerized machine learning platform. In later chapters, you will prepare data, build and deploy machine learning models, and automate workflow tasks using the same platform.

Finally, the exercises in this book will help you get hands-on experience in Kubernetes and open source tools such as JupyterHub, MLflow, and Airflow. By the end of this book, you'll have learned how to effectively build, train, and deploy a machine learning model using the machine learning platform you built.
Table of Contents (16 chapters)

Part 1: The Challenges of Adopting ML and Understanding MLOps (What and Why)
Part 2: The Building Blocks of an MLOps Platform and How to Build One on Kubernetes
Part 3: How to Use the MLOps Platform and Build a Full End-to-End Project Using the New Platform

Understanding model inferencing with Seldon Core

In the previous chapter, you built a model. Data science teams build models so that they can be used in production to serve prediction requests. There are many ways to use a model in production, such as embedding it in your customer-facing application, but the most common approach is to expose the model as a REST API, which any application can then call. In general, running and serving a model in production is called model serving.
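As a sketch of that pattern, the snippet below wraps a feature vector in a JSON body and posts it to a model-serving endpoint. The URL, path, and `"instances"` key are hypothetical placeholders; the exact request schema depends on the serving framework you use.

```python
import json
from urllib import request

# Hypothetical endpoint -- the real host and path depend on where the model is served.
MODEL_URL = "http://models.example.com/v1/predict"

def build_request_body(features):
    """Wrap a single feature vector in a JSON request body.

    The "instances" key is a common convention, not a universal standard;
    check your serving framework's request schema.
    """
    return json.dumps({"instances": [features]})

def predict(features, url=MODEL_URL):
    """POST the features to the model's REST API and return the parsed response."""
    req = request.Request(
        url,
        data=build_request_body(features).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Because the model sits behind plain HTTP and JSON, any client in any language can consume it without knowing which framework trained the model.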

However, once the model is in production, it needs to be monitored for performance and updated when it no longer meets the expected criteria. A hosted model solution enables you not only to serve the model but also to monitor its performance and generate alerts that can be used to trigger retraining of the model.

Seldon is a UK-based firm that created a set of tools to manage the model life cycle. Seldon Core is an open source framework that helps expose ML models for consumption as REST APIs...
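To make that concrete, here is a minimal sketch of calling a model deployed with Seldon Core's v1 REST protocol. The host, namespace, and deployment name (`income-model`) are hypothetical, and the ingress path can differ depending on how your cluster routes traffic.

```python
import json

def seldon_payload(rows):
    """Build the JSON body for Seldon Core's v1 prediction protocol."""
    # Seldon Core v1 accepts tabular input as a nested list under data.ndarray.
    return json.dumps({"data": {"ndarray": rows}})

def prediction_url(host, deployment, namespace="default"):
    # With an Istio or Ambassador ingress, Seldon Core typically exposes
    # predictions at /seldon/<namespace>/<deployment>/api/v1.0/predictions.
    return f"{host}/seldon/{namespace}/{deployment}/api/v1.0/predictions"

# Hypothetical deployment -- substitute your own host and SeldonDeployment name.
url = prediction_url("http://seldon.example.com", "income-model")
body = seldon_payload([[39, 7, 1, 1, 1]])
```

The response follows the same `data` envelope, so the caller can read predictions back from `data.ndarray` without any Seldon-specific client library.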