Azure Machine Learning Engineering

By: Sina Fakhraee, Balamurugan Balakreshnan, Megan Masanz

Overview of this book

Data scientists working on productionizing machine learning (ML) workloads face a breadth of challenges at every step, owing to the countless factors involved in getting ML models deployed and running. This book offers solutions to common issues, detailed explanations of essential concepts, and step-by-step instructions for productionizing ML workloads using the Azure Machine Learning service. With this practical guide, data scientists and ML engineers working with Microsoft Azure will see how to train and deploy ML models at scale. Throughout the book, you’ll learn how to train, register, and productionize ML models using the power of the Azure Machine Learning service. You’ll get to grips with scoring models in real time and in batch, explaining models to earn business trust, mitigating model bias, and developing solutions using an MLOps framework. By the end of this Azure Machine Learning book, you’ll be ready to build and deploy end-to-end ML solutions into a production system using the Azure Machine Learning service for real-time scenarios.
Table of Contents (17 chapters)

Part 1: Training and Tuning Models with the Azure Machine Learning Service
Part 2: Deploying and Explaining Models in AMLS
Part 3: Productionizing Your Workload with MLOps

Deploying ML Models for Batch Scoring

Deploying ML models for batch scoring supports making predictions over large volumes of data. This approach suits use cases in which you don’t need your model predictions immediately, but rather minutes or hours later. If you need to run inference once a day, week, or month over a large dataset, batch inferencing is ideal.

Batch inferencing allows data scientists and ML professionals to use cloud compute only when it is needed, rather than paying for compute resources to remain available for real-time responses. Compute resources can be spun up to support batch inferencing and spun down once the results have been delivered to business users. We are going to show you how to use the Azure Machine Learning service, through both the studio and the Python SDK, to deploy trained models to managed batch endpoints: HTTPS REST APIs that clients can invoke to trigger batch scoring jobs against a trained model, as illustrated in the sketch below.
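To make the SDK path concrete, here is a minimal sketch using the Azure Machine Learning Python SDK v2 (azure-ai-ml). It assumes an existing workspace, a registered MLflow model named my-registered-model, and a compute cluster named cpu-cluster; all resource names below are hypothetical placeholders, not values from this book.

    from azure.ai.ml import Input, MLClient
    from azure.ai.ml.constants import AssetTypes, BatchDeploymentOutputAction
    from azure.ai.ml.entities import BatchDeployment, BatchEndpoint
    from azure.identity import DefaultAzureCredential

    # Connect to the workspace (the IDs below are placeholders).
    ml_client = MLClient(
        credential=DefaultAzureCredential(),
        subscription_id="<subscription-id>",
        resource_group_name="<resource-group>",
        workspace_name="<workspace-name>",
    )

    # Create the managed batch endpoint -- the stable HTTPS REST API
    # that clients will invoke.
    endpoint = BatchEndpoint(
        name="batch-scoring-endpoint",  # hypothetical endpoint name
        description="Batch scoring endpoint for a registered model",
    )
    ml_client.batch_endpoints.begin_create_or_update(endpoint).result()

    # Attach a deployment to the endpoint. An MLflow-registered model
    # is assumed here, so no scoring script or environment is needed.
    deployment = BatchDeployment(
        name="batch-deployment",
        endpoint_name=endpoint.name,
        model="azureml:my-registered-model:1",  # hypothetical model
        compute="cpu-cluster",               # existing AmlCompute cluster
        instance_count=2,                # nodes the job can scale out to
        max_concurrency_per_instance=2,  # parallel mini-batches per node
        mini_batch_size=10,              # input files per mini-batch
        output_action=BatchDeploymentOutputAction.APPEND_ROW,
        output_file_name="predictions.csv",
    )
    ml_client.batch_deployments.begin_create_or_update(deployment).result()

    # Invoking the endpoint queues a batch scoring job over a folder of
    # input data; the cluster scales up for the job and back down after.
    job = ml_client.batch_endpoints.invoke(
        endpoint_name=endpoint.name,
        deployment_name=deployment.name,
        input=Input(
            type=AssetTypes.URI_FOLDER,
            path="azureml://datastores/workspaceblobstore/paths/batch-input/",
        ),
    )

Because the compute cluster can scale down between jobs, you pay for nodes only while a scoring job is running, which is exactly the cost profile described above.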

...