Mastering Azure Machine Learning

By: Christoph Körner, Kaijisse Waaijer

Overview of this book

Today's increase in data volume requires distributed systems, powerful algorithms, and scalable cloud infrastructure to compute insights and to train and deploy machine learning (ML) models. This book will help you improve your knowledge of building ML models using Azure and end-to-end ML pipelines on the cloud.

The book starts with an overview of an end-to-end ML project and a guide on how to choose the right Azure service for different ML tasks. It then focuses on Azure Machine Learning and takes you through the process of data experimentation, data preparation, and feature engineering using Azure Machine Learning and Python. You'll learn advanced feature extraction techniques using natural language processing (NLP), classical ML techniques, and the secrets of both a great recommendation engine and a performant computer vision model using deep learning methods. You'll also explore how to train, optimize, and tune models using Azure Automated Machine Learning and HyperDrive, and perform distributed training on Azure. Then, you'll learn different deployment and monitoring techniques using Azure Kubernetes Service with Azure Machine Learning, along with the basics of MLOps (DevOps for ML) to automate your ML process as a CI/CD pipeline.

By the end of this book, you'll have mastered Azure Machine Learning and will be able to confidently design, build, and operate scalable ML pipelines in Azure.
Table of Contents (20 chapters)

Section 1: Azure Machine Learning
Section 2: Experimentation and Data Preparation
Section 3: Training Machine Learning Models
Section 4: Optimization and Deployment of Machine Learning Models
Index

Summary

In this chapter, we introduced hyperparameter tuning through HyperDrive and Azure Automated Machine Learning. We saw that both techniques can help you efficiently find the best model for your ML task.

Grid sampling works well with classical ML models when the number of tunable parameters is fixed: every combination of values on a discrete parameter grid is evaluated. With random sampling, we can also draw parameter choices from continuous distributions, selecting as many configurations as fit into the configured training duration. Random sampling therefore scales better to a large number of parameters. Both sampling techniques can, and should, be combined with an early stopping criterion.
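To make the difference concrete, here is a minimal, framework-free sketch of the two sampling strategies in plain Python (these are not HyperDrive API calls; the parameter names and the scoring function are made up for illustration):

```python
import itertools
import random

# Toy objective: a hypothetical stand-in for training a model
# and returning its validation score.
def train_and_score(params):
    return 1.0 - abs(params["lr"] - 0.01) - 0.001 * params["depth"]

# Grid sampling: evaluate every combination on a discrete grid.
grid = {"lr": [0.001, 0.01, 0.1], "depth": [3, 5, 7]}
grid_runs = [dict(zip(grid, values))
             for values in itertools.product(*grid.values())]

# Random sampling: draw from (possibly continuous) distributions,
# evaluating only as many configurations as the budget allows.
budget = 5
random.seed(42)
random_runs = [{"lr": 10 ** random.uniform(-3, -1),  # log-uniform over [0.001, 0.1]
                "depth": random.choice([3, 5, 7])}
               for _ in range(budget)]

best_grid = max(grid_runs, key=train_and_score)
best_random = max(random_runs, key=train_and_score)
print(len(grid_runs), best_grid["lr"])  # 9 combinations; the grid hits lr=0.01 exactly
```

Note how the grid's cost grows multiplicatively with each added parameter, while random sampling's cost is fixed by the budget, which is why random sampling copes better with many parameters.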

Unlike random and grid sampling, Bayesian optimization probes the model's performance in order to optimize subsequent parameter choices. This means that each set of parameter choices and the resulting model performance is used to compute the next best parameter choices. Therefore, Bayesian...
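The sequential, feedback-driven idea can be sketched in a few lines of plain Python. This is not Bayesian optimization proper (there is no probabilistic surrogate model or acquisition function, which real implementations such as HyperDrive's Bayesian sampling rely on), but it shows how each new parameter choice is informed by the scores already observed; the objective function is hypothetical:

```python
import random

# Toy objective: a hypothetical stand-in for training + validation.
def train_and_score(lr):
    return 1.0 - abs(lr - 0.01)

random.seed(0)
history = []  # (lr, score) pairs observed so far

for trial in range(10):
    if len(history) < 3:
        # Warm-up phase: explore the space at random (log-uniform).
        lr = 10 ** random.uniform(-4, 0)
    else:
        # Feedback phase: use past results, sampling near the best
        # configuration observed so far (within half an order of magnitude).
        best_lr, _ = max(history, key=lambda h: h[1])
        lr = best_lr * 10 ** random.uniform(-0.5, 0.5)
    history.append((lr, train_and_score(lr)))

best_lr, best_score = max(history, key=lambda h: h[1])
```

Because later trials depend on earlier results, this style of search is inherently more sequential than grid or random sampling, which is why Bayesian sampling typically limits how many runs execute concurrently.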