Practical Deep Learning at Scale with MLflow

By: Yong Liu

Overview of this book

The book starts with an overview of the deep learning (DL) life cycle and the emerging Machine Learning Ops (MLOps) field, providing a clear picture of the four pillars of deep learning (data, model, code, and explainability) and the role of MLflow in each of these areas. From there, it guides you step by step through MLflow experiments and usage patterns, using MLflow as a unified framework to track DL data, code and pipelines, models, parameters, and metrics at scale. You'll also tackle running DL pipelines in a distributed execution environment with reproducibility and provenance tracking, and tuning DL models through hyperparameter optimization (HPO) with Ray Tune, Optuna, and HyperBand. As you progress, you'll learn how to build a multi-step DL inference pipeline with preprocessing and postprocessing steps, deploy a DL inference pipeline for production using Ray Serve and AWS SageMaker, and finally create a DL explanation as a service (EaaS) using the popular Shapley Additive Explanations (SHAP) toolbox. By the end of this book, you'll have built the foundation and gained the hands-on experience you need to develop a DL pipeline solution from initial offline experimentation to final deployment and production, all within a reproducible and open source framework.
Table of Contents (17 chapters)

Section 1 – Deep Learning Challenges and MLflow Prime
Section 2 – Tracking a Deep Learning Pipeline at Scale
Section 3 – Running Deep Learning Pipelines at Scale
Section 4 – Deploying a Deep Learning Pipeline at Scale
Section 5 – Deep Learning Model Explainability at Scale

Chapter 7: Multi-Step Deep Learning Inference Pipeline

Now that we have successfully run HPO (hyperparameter optimization) and produced a well-tuned DL model that meets the business requirements, it is time to move to the next step: using this model for prediction. This is where the model inference pipeline comes into play, where the model is used for predicting or scoring real-world data in production, either in real-time or batch mode. However, an inference pipeline usually does not rely on a single model alone; it also needs preprocessing and postprocessing logic that is not necessarily seen during the model development stage. An example of a preprocessing step is detecting the language locale (English or another language) before passing the input data to the model for scoring. Postprocessing could include enriching the predicted labels with additional metadata to meet the business application's requirements. There are also patterns of ML/DL inference pipelines that...
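To make this concrete, the following is a minimal sketch of how such a multi-step pipeline could be wrapped as a single MLflow pyfunc model, with a language check as the preprocessing step and metadata enrichment as the postprocessing step. The model URI, the langdetect-based language check, and the metadata fields shown here are illustrative assumptions, not the exact code developed in this chapter.

```python
# Sketch of a multi-step inference pipeline packaged as one MLflow pyfunc model.
# Assumptions: a previously logged/tuned model at a hypothetical URI, and the
# langdetect package (which would need to be declared as a pip dependency).
import mlflow.pyfunc
import pandas as pd


class InferencePipeline(mlflow.pyfunc.PythonModel):
    def __init__(self, finetuned_model_uri):
        self.finetuned_model_uri = finetuned_model_uri

    def load_context(self, context):
        # Load the tuned DL model produced by the HPO step.
        self.model = mlflow.pyfunc.load_model(self.finetuned_model_uri)

    def _preprocess(self, text):
        # Preprocessing: only pass English text to the model (hypothetical check).
        from langdetect import detect
        try:
            return detect(text) == "en"
        except Exception:
            return False

    def _postprocess(self, prediction):
        # Postprocessing: enrich the raw prediction with business metadata.
        return {"label": str(prediction), "model_uri": self.finetuned_model_uri}

    def predict(self, context, model_input: pd.DataFrame) -> pd.DataFrame:
        results = []
        for text in model_input["text"]:
            if not self._preprocess(text):
                results.append({"label": "non_english_skipped",
                                "model_uri": self.finetuned_model_uri})
                continue
            pred = self.model.predict(pd.DataFrame({"text": [text]}))
            results.append(self._postprocess(pred))
        return pd.DataFrame(results)


# Log the whole pipeline as a single MLflow model so it can be deployed as one unit.
with mlflow.start_run():
    mlflow.pyfunc.log_model(
        artifact_path="inference_pipeline",
        python_model=InferencePipeline("models:/nlp_classifier/1"),  # hypothetical URI
    )
```

Because the entire pipeline is logged as one MLflow model, the same artifact can later be served in batch or real-time mode without re-implementing the preprocessing and postprocessing logic outside the model.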