
Machine Learning on Kubernetes

By: Faisal Masood, Ross Brigoli
Overview of this book

MLOps is an emerging field that aims to bring the repeatability, automation, and standardization of software engineering to data science and machine learning engineering. By implementing MLOps with Kubernetes, data scientists, IT professionals, and data engineers can collaborate to build machine learning solutions that deliver business value for their organization. You'll begin by understanding the different components of a machine learning project. Then, you'll design and build a practical end-to-end machine learning project using open source software. As you progress, you'll learn the basics of MLOps and the value it can bring to machine learning projects. You'll also gain experience in building, configuring, and using an open source, containerized machine learning platform. In later chapters, you'll prepare data, build and deploy machine learning models, and automate workflow tasks using the same platform. Finally, the exercises in this book will give you hands-on experience with Kubernetes and open source tools such as JupyterHub, MLflow, and Airflow. By the end of this book, you'll know how to effectively build, train, and deploy a machine learning model using the platform you built.
Table of Contents (16 chapters)

Part 1: The Challenges of Adopting ML and Understanding MLOps (What and Why)
Part 2: The Building Blocks of an MLOps Platform and How to Build One on Kubernetes
Part 3: How to Use the MLOps Platform and Build a Full End-to-End Project Using the New Platform

Understanding the basics of Apache Spark

Apache Spark is an open source data processing engine designed for large-scale distributed data processing. This means that if you have smaller datasets, say tens or even a few hundreds of gigabytes, a well-tuned traditional database may provide faster processing times. The main differentiator for Apache Spark is its capability to perform intermediate computations in memory, which makes it much faster than Hadoop MapReduce.

Apache Spark is built for speed, flexibility, and ease of use. It offers more than 70 high-level data processing operators, so data engineers can express data processing logic concisely through the Apache Spark APIs rather than writing it by hand. Its flexibility comes from acting as a unified data processing engine: the same engine handles several types of data workloads, such as batch applications, streaming applications, interactive queries, and even ML algorithms.

Figure 5.26 shows the Apache Spark components...