Practical Deep Learning at Scale with MLflow

By Yong Liu
Overview of this book

The book starts with an overview of the deep learning (DL) life cycle and the emerging Machine Learning Ops (MLOps) field, providing a clear picture of the four pillars of deep learning (data, model, code, and explainability) and the role of MLflow in these areas. From there onward, it guides you step by step through the concept of MLflow experiments and usage patterns, using MLflow as a unified framework to track DL data, code and pipelines, models, parameters, and metrics at scale. You'll also tackle running DL pipelines in a distributed execution environment with reproducibility and provenance tracking, and tuning DL models through hyperparameter optimization (HPO) with Ray Tune, Optuna, and HyperBand. As you progress, you'll learn how to build a multi-step DL inference pipeline with preprocessing and postprocessing steps, deploy a DL inference pipeline to production using Ray Serve and AWS SageMaker, and finally create a DL explanation as a service (EaaS) using the popular Shapley Additive Explanations (SHAP) toolbox. By the end of this book, you'll have built the foundation and gained the hands-on experience you need to develop a DL pipeline solution from initial offline experimentation to final deployment and production, all within a reproducible and open source framework.
Table of Contents (17 chapters)

Section 1 – Deep Learning Challenges and MLflow Prime
Section 2 – Tracking a Deep Learning Pipeline at Scale
Section 3 – Running Deep Learning Pipelines at Scale
Section 4 – Deploying a Deep Learning Pipeline at Scale
Section 5 – Deep Learning Model Explainability at Scale

Tracking model parameters

As we have already seen, auto-logging in MLflow offers many benefits, but if we want to track additional model parameters, we have two options: use MLflow to log extra parameters on top of what auto-logging records, or skip auto-logging entirely and log every parameter we want directly with MLflow.
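As a minimal sketch of the first option (the extra parameter name and value here are hypothetical examples, not taken from this book's project), auto-logging and manual logging can be combined inside the same run:

    import mlflow

    # Turn on MLflow auto-logging; it records the framework's built-in
    # parameters, metrics, and model artifacts automatically.
    mlflow.autolog()

    with mlflow.start_run():
        # ... training code runs here; auto-logging captures its parameters ...
        # Log one extra parameter that auto-logging does not record
        # ("data_version" is a hypothetical example name).
        mlflow.log_param("data_version", "v2.1")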

Let's walk through a notebook that does not use MLflow auto-logging. If we want full control over which parameters MLflow logs, we can use two APIs: mlflow.log_param and mlflow.log_params. The first logs a single key-value parameter, while the second logs an entire dictionary of key-value parameters. So, what kinds of parameters might we be interested in tracking? The following list answers this, and a short usage sketch of the two APIs follows it:

  • Model hyperparameters: Hyperparameters are defined before the learning process begins, which means they control how the learning process learns. These parameters can be tuned and can directly affect how well...
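Here is a minimal usage sketch of the two APIs (the hyperparameter names and values are hypothetical examples, not the actual settings used in this chapter's pipeline):

    import mlflow

    with mlflow.start_run():
        # Log a single key-value parameter
        mlflow.log_param("learning_rate", 3e-4)
        # Log an entire dictionary of key-value parameters at once
        mlflow.log_params({
            "batch_size": 64,
            "num_epochs": 10,
            "optimizer": "AdamW",
        })

Both calls record the parameters under the current active run, so they appear side by side on the run's page in the MLflow tracking UI.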