Azure Machine Learning Engineering

By: Sina Fakhraee, Balamurugan Balakreshnan, Megan Masanz

Overview of this book

Data scientists working on productionizing machine learning (ML) workloads face a breadth of challenges at every step, owing to the countless factors involved in getting ML models deployed and running. This book offers solutions to common issues, detailed explanations of essential concepts, and step-by-step instructions to productionize ML workloads using the Azure Machine Learning service. This practical guide shows how data scientists and ML engineers working with Microsoft Azure can train and deploy ML models at scale. Throughout the book, you’ll learn how to train, register, and productionize ML models using the Azure Machine Learning service. You’ll get to grips with scoring models in real time and in batch, explaining models to earn business trust, mitigating model bias, and developing solutions using an MLOps framework. By the end of this Azure Machine Learning book, you’ll be ready to build and deploy end-to-end ML solutions into a production system using the Azure Machine Learning service for real-time scenarios.
Table of Contents (17 chapters)

Part 1: Training and Tuning Models with the Azure Machine Learning Service
Part 2: Deploying and Explaining Models in AMLS
Part 3: Productionizing Your Workload with MLOps

Understanding the MLOps implementation

As mentioned, MLOps is a concept, not an implementation. We will provide an implementation of MLOps as the foundation for this chapter. We will establish an Azure DevOps pipeline that orchestrates an AML pipeline to transform data in the dev environment, create a model leveraging MLflow, and evaluate whether the new model performs at least as well as the existing model. If this pipeline registers a new model, we will deploy it in the dev environment leveraging blue/green deployments. Blue/green deployments enable high availability: the existing endpoint deployment continues to serve traffic while the new model is being deployed. Once the new model has been deployed to the managed online endpoint, we swap traffic over to it. After the model is deployed in the dev environment, we will trigger an approval process to register and deploy the new model into the qa environment, which will also leverage blue/green...
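To make the blue/green swap concrete, here is a minimal sketch using the Azure Machine Learning Python SDK v2 (azure-ai-ml). The workspace details, endpoint name, and model name are placeholders rather than values from this chapter's pipeline, and in the full implementation these steps would be executed from the Azure DevOps pipeline rather than run interactively.

```python
# Minimal blue/green deployment sketch with the Azure ML Python SDK v2.
# All names (subscription, workspace, endpoint, model) are hypothetical.
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ManagedOnlineDeployment
from azure.identity import DefaultAzureCredential

# Connect to the dev workspace.
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<dev-workspace>",
)

endpoint_name = "dev-online-endpoint"  # existing managed online endpoint with a "blue" deployment

# Reference the newly registered model (MLflow format, so no-code deployment is possible).
model = ml_client.models.get(name="my-mlflow-model", version="2")

# Create the "green" deployment alongside the existing "blue" one.
green = ManagedOnlineDeployment(
    name="green",
    endpoint_name=endpoint_name,
    model=model,
    instance_type="Standard_DS3_v2",
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(green).result()

# "blue" keeps serving 100% of traffic while "green" is being created;
# once "green" is healthy, swap all traffic over to the new model.
endpoint = ml_client.online_endpoints.get(name=endpoint_name)
endpoint.traffic = {"blue": 0, "green": 100}
ml_client.online_endpoints.begin_create_or_update(endpoint).result()
```

After the traffic swap, the old deployment can either be deleted to free compute or kept temporarily at 0% traffic as a quick rollback target; which option to choose is an operational decision rather than a requirement of the pattern.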