Engineering MLOps

By: Emmanuel Raj

Overview of this book

Engineering MLOps presents comprehensive insights into MLOps, coupled with real-world examples in Azure, to help you write programs, train robust and scalable ML models, and build ML pipelines to train and deploy models securely in production. The book begins by familiarizing you with the MLOps workflow so you can start writing programs to train ML models. You'll then move on to explore options for serializing and packaging ML models post-training to facilitate machine learning inference, model interoperability, and end-to-end model traceability. You'll learn how to build ML pipelines, continuous integration and continuous delivery (CI/CD) pipelines, and monitoring pipelines to systematically build, deploy, monitor, and govern ML solutions for businesses and industries. Finally, you'll apply the knowledge you've gained to build real-world projects. By the end of this ML book, you'll have a 360-degree view of MLOps and be ready to implement MLOps in your organization.
Table of Contents (18 chapters)

Section 1: Framework for Building Machine Learning Models
Section 2: Deploying Machine Learning Models at Scale
Section 3: Monitoring Machine Learning Models in Production

Understanding the types of ML inference in production

In the previous section, we looked at the priorities of ML in research and production. To serve business needs in production, ML models are deployed to various deployment targets, depending on the need, where they are used for inference. Making a prediction or a decision using a trained ML model is called ML model inference. Let's explore ways of deploying ML models to different deployment targets to facilitate ML inference as per the business needs.
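To make the idea of inference concrete, here is a minimal sketch (not from the book) of loading a previously trained model and predicting on new data; it assumes a scikit-learn model serialized earlier with joblib, and the file name and feature values are illustrative only:

```python
# A minimal sketch of ML model inference: load a previously trained model
# and use it to make a prediction on unseen data.
# "model.pkl" and the sample features are illustrative assumptions.
import joblib

model = joblib.load("model.pkl")            # deserialize the trained model
new_sample = [[5.1, 3.5, 1.4, 0.2]]         # one unseen data point
prediction = model.predict(new_sample)      # inference: predict for new data
print(prediction)
```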

Deployment targets

In this section, we will look at different types of deployment targets and explore why and how ML models are served for inference on each of them. Let's start with virtual machines and on-premises servers.

Virtual machines

Virtual machines can run in the cloud or on-premises, depending on the IT setup of a business or an organization. Serving ML models on virtual machines is quite common; the models are served in the form of web services. The web service...
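As a hedged illustration of serving a model as a web service on a virtual machine (a minimal sketch, not the book's implementation), the snippet below exposes a prediction endpoint with Flask; it assumes the same joblib-serialized scikit-learn model as above, and the route name, port, and payload format are assumptions for the example:

```python
# A minimal sketch of serving an ML model as a web service on a virtual machine.
# Assumes a scikit-learn model serialized to "model.pkl" with joblib; the file
# name, endpoint, and feature layout are illustrative, not from the book.
import joblib
import numpy as np
from flask import Flask, request, jsonify

app = Flask(__name__)
model = joblib.load("model.pkl")  # load the trained model once at startup


@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()                  # e.g. {"features": [5.1, 3.5, 1.4, 0.2]}
    features = np.array(payload["features"]).reshape(1, -1)
    prediction = model.predict(features)
    return jsonify({"prediction": prediction.tolist()})


if __name__ == "__main__":
    # Bind to all interfaces so clients can reach the VM on port 5000.
    app.run(host="0.0.0.0", port=5000)
```

A client would then send a POST request with a JSON body of feature values to the /predict endpoint and receive the model's prediction in the response.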