Practical Deep Learning at Scale with MLflow

By Yong Liu
Overview of this book

The book starts with an overview of the deep learning (DL) life cycle and the emerging Machine Learning Ops (MLOps) field, providing a clear picture of the four pillars of deep learning: data, model, code, and explainability, and the role MLflow plays in each of these areas. From there onward, it guides you step by step through the concept of MLflow experiments and usage patterns, using MLflow as a unified framework to track DL data, code and pipelines, models, parameters, and metrics at scale. You'll also tackle running DL pipelines in a distributed execution environment with reproducibility and provenance tracking, and tuning DL models through hyperparameter optimization (HPO) with Ray Tune, Optuna, and HyperBand. As you progress, you'll learn how to build a multi-step DL inference pipeline with preprocessing and postprocessing steps, deploy a DL inference pipeline for production using Ray Serve and AWS SageMaker, and finally create a DL explanation as a service (EaaS) using the popular Shapley Additive Explanations (SHAP) toolbox. By the end of this book, you'll have built the foundation and gained the hands-on experience you need to develop a DL pipeline solution from initial offline experimentation to final deployment and production, all within a reproducible and open source framework.
Table of Contents (17 chapters)

Section 1 – Deep Learning Challenges and MLflow Prime
Section 2 – Tracking a Deep Learning Pipeline at Scale
Section 3 – Running Deep Learning Pipelines at Scale
Section 4 – Deploying a Deep Learning Pipeline at Scale
Section 5 – Deep Learning Model Explainability at Scale

Running HPO with Ray Tune using Optuna and HyperBand

Now, let's do some experiments with different search algorithms and schedulers. Given that Optuna is an excellent TPE-based search algorithm, and ASHA is an effective scheduler that runs trials asynchronously in parallel and terminates unpromising ones early, it is interesting to see how few changes are needed to make them work together.

It turns out that, building on what we have already done in the previous section, the changes are minimal. Here, we will illustrate the four main changes:

  1. Install the Optuna package. This can be done by running the following command:
    pip install optuna==2.10.0

This will install Optuna in the same virtual environment that we used before. If you have already run pip install -r requirements.txt, then Optuna is already installed and you can skip this step.

  2. Import the relevant Ray Tune modules that integrate with Optuna and the ASHA scheduler (here, we use Ray Tune's asynchronous HyperBand implementation of ASHA), as sketched below.
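The following is a minimal, self-contained sketch of these imports and how they plug into tune.run. It assumes the Ray 1.x module layout, where OptunaSearch lives under ray.tune.suggest.optuna (in Ray 2.x it moved to ray.tune.search.optuna); the objective function and search space here are illustrative placeholders, not the book's actual fine-tuning code:

    from ray import tune
    from ray.tune.suggest.optuna import OptunaSearch
    from ray.tune.schedulers import AsyncHyperBandScheduler

    def objective(config):
        # Placeholder objective: stands in for the DL fine-tuning step
        loss = (config["lr"] - 0.01) ** 2
        tune.report(loss=loss)

    # Optuna supplies the TPE-based search; ASHA terminates poor trials early
    search_alg = OptunaSearch()
    scheduler = AsyncHyperBandScheduler()

    analysis = tune.run(
        objective,
        metric="loss",
        mode="min",
        search_alg=search_alg,
        scheduler=scheduler,
        num_samples=10,
        config={"lr": tune.loguniform(1e-4, 1e-1)},
    )
    print(analysis.best_config)

Note that metric and mode are passed once to tune.run so that the search algorithm and the scheduler optimize the same objective in the same direction.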