
Engineering MLOps

By: Emmanuel Raj

Overview of this book

Engineering MLOps presents comprehensive insights into MLOps, coupled with real-world examples in Azure, to help you write programs, train robust and scalable ML models, and build ML pipelines to train and deploy models securely in production. The book begins by familiarizing you with the MLOps workflow so you can start writing programs to train ML models. You'll then move on to explore options for serializing and packaging ML models post-training so you can deploy them to facilitate machine learning inference, model interoperability, and end-to-end model traceability. You'll learn how to build ML pipelines, continuous integration and continuous delivery (CI/CD) pipelines, and monitoring pipelines to systematically build, deploy, monitor, and govern ML solutions for businesses and industries. Finally, you'll apply the knowledge you've gained to build real-world projects. By the end of this book, you'll have a 360-degree view of MLOps and be ready to implement MLOps in your organization.
Table of Contents (18 chapters)
Section 1: Framework for Building Machine Learning Models
Section 2: Deploying Machine Learning Models at Scale
Section 3: Monitoring Machine Learning Models in Production

Model evaluation and interpretability metrics

Acquiring data and training ML models is a good start toward creating business value. After training models, it is vital to measure their performance and to understand why and how a model predicts or performs in a certain way. Hence, model evaluation and interpretability are essential parts of the MLOps workflow. They enable us to understand and validate ML models and to determine the business value they will produce. As there are several types of ML models, there are numerous evaluation techniques as well.
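
To make this concrete, here is a minimal sketch (not from the book) of computing common evaluation metrics for a binary classifier with scikit-learn; the synthetic dataset and the LogisticRegression model are placeholder assumptions standing in for your own trained model and test split:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)
from sklearn.model_selection import train_test_split

# Hypothetical stand-in data and model; substitute your own trained model.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

y_pred = clf.predict(X_test)
y_prob = clf.predict_proba(X_test)[:, 1]  # probability of the positive class

print(f"Accuracy : {accuracy_score(y_test, y_pred):.3f}")
print(f"Precision: {precision_score(y_test, y_pred):.3f}")
print(f"Recall   : {recall_score(y_test, y_pred):.3f}")
print(f"F1 score : {f1_score(y_test, y_pred):.3f}")
print(f"ROC AUC  : {roc_auc_score(y_test, y_prob):.3f}")
```

Note that no single number tells the whole story: accuracy alone can be misleading on imbalanced data, which is why precision, recall, F1, and ROC AUC are usually reported together.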

Looking back at Chapter 2, Characterizing Your Machine Learning Problem, where we studied various types of models categorized as learning models, hybrid models, statistical models, and HITL (human-in-the-loop) models, we will now discuss different metrics to evaluate these models. Figure 5.1 shows some of the key model evaluation and interpretability techniques. These have become standard in research...
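
As one example of an interpretability technique, permutation importance is a model-agnostic way to gauge which features a model relies on: shuffle one feature's values on held-out data and measure how much the evaluation metric degrades. Below is a minimal sketch using scikit-learn's permutation_importance, reusing the same hypothetical classifier setup as in the earlier metrics example:

```python
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical model and data, as in the evaluation sketch above.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Shuffle each feature on the held-out set and measure the drop in accuracy;
# a large drop suggests the model relies heavily on that feature.
result = permutation_importance(clf, X_test, y_test, scoring="accuracy",
                                n_repeats=10, random_state=42)

# Report the five most important features (indices are illustrative).
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {idx}: {result.importances_mean[idx]:.4f} "
          f"+/- {result.importances_std[idx]:.4f}")
```

Because it only needs predictions and a scoring function, this technique works for any fitted model, which makes it a convenient baseline before reaching for model-specific explanation methods.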