Mastering Azure Machine Learning

By: Christoph Körner, Kaijisse Waaijer

Overview of this book

Today's increasing data volumes require distributed systems, powerful algorithms, and scalable cloud infrastructure to compute insights and to train and deploy machine learning (ML) models. This book will help you improve your knowledge of building ML models using Azure and end-to-end ML pipelines on the cloud.

The book starts with an overview of an end-to-end ML project and a guide on how to choose the right Azure service for different ML tasks. It then focuses on Azure Machine Learning and takes you through the process of data experimentation, data preparation, and feature engineering using Azure Machine Learning and Python. You'll learn advanced feature extraction techniques using natural language processing (NLP), classical ML techniques, and the secrets of both a great recommendation engine and a performant computer vision model using deep learning methods. You'll also explore how to train, optimize, and tune models using Azure Automated Machine Learning and HyperDrive, and how to perform distributed training on Azure. Then, you'll learn different deployment and monitoring techniques using Azure Kubernetes Service with Azure Machine Learning, along with the basics of MLOps (DevOps for ML) to automate your ML process as a CI/CD pipeline. By the end of this book, you'll have mastered Azure Machine Learning and be able to confidently design, build, and operate scalable ML pipelines in Azure.
Table of Contents (20 chapters)

1. Section 1: Azure Machine Learning
4. Section 2: Experimentation and Data Preparation
9. Section 3: Training Machine Learning Models
15. Section 4: Optimization and Deployment of Machine Learning Models
19. Index

Summary

In this chapter, you learned how to configure Azure Machine Learning pipelines to split an ML workflow into multiple steps, and how to use pipelines and pipeline steps for estimators, Python script execution, and parallel execution. You configured pipeline inputs and outputs using Dataset and PipelineData, and controlled the execution flow of the pipeline.
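The flow above can be sketched with the Azure Machine Learning SDK v1. This is a minimal, hedged example, not the book's exact code: the workspace config, compute target name ("cpu-cluster"), dataset name ("raw-data"), and script names (prepare.py, train.py) are all placeholder assumptions.

```python
# Hypothetical two-step pipeline sketch (Azure ML SDK v1).
# Assumes a config.json, a compute cluster, and a registered dataset exist.
from azureml.core import Workspace, Dataset, Experiment
from azureml.pipeline.core import Pipeline, PipelineData
from azureml.pipeline.steps import PythonScriptStep

ws = Workspace.from_config()
compute = ws.compute_targets["cpu-cluster"]      # assumed compute target name

# Pipeline input: a dataset registered in the workspace (assumed name)
raw_data = Dataset.get_by_name(ws, name="raw-data")

# Intermediate output passed from the first step to the second
prepared = PipelineData("prepared", datastore=ws.get_default_datastore())

prep_step = PythonScriptStep(
    name="prepare",
    script_name="prepare.py",                    # assumed script
    arguments=["--output", prepared],
    inputs=[raw_data.as_named_input("raw")],
    outputs=[prepared],
    compute_target=compute)

train_step = PythonScriptStep(
    name="train",
    script_name="train.py",                      # assumed script
    arguments=["--input", prepared],
    inputs=[prepared],
    compute_target=compute)

# The pipeline infers the execution order from the data dependency
pipeline = Pipeline(workspace=ws, steps=[prep_step, train_step])
run = Experiment(ws, "pipeline-demo").submit(pipeline)
```

Because the second step consumes the PipelineData produced by the first, the SDK orders the steps automatically; no explicit dependency declaration is needed.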

As another milestone, you deployed the pipeline as a PublishedPipeline instance behind an HTTP endpoint, which lets you configure and trigger pipeline execution with a simple HTTP call. After that, you implemented automatic scheduling based on a time frequency, and you used reactive scheduling based on changes in the underlying dataset. The pipeline can now rerun your workflow when the input data changes, without any manual intervention.
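Both scheduling variants can be sketched as follows, again as a hedged SDK v1 example rather than the chapter's exact code. It assumes `pipeline` is a Pipeline object like the one built earlier in the chapter; the schedule names, experiment name, and polling interval are illustrative.

```python
# Hypothetical sketch: publish a pipeline and attach two schedules (SDK v1).
from azureml.core import Workspace
from azureml.pipeline.core import Schedule, ScheduleRecurrence

ws = Workspace.from_config()

# `pipeline` is assumed to be an existing azureml.pipeline.core.Pipeline
published = pipeline.publish(
    name="training-pipeline",
    description="Prepares data and trains the model")
# The pipeline can now be triggered via a REST call to published.endpoint
# (authenticated with an Azure AD bearer token).

# 1) Time-based schedule: rerun the pipeline once a day
recurrence = ScheduleRecurrence(frequency="Day", interval=1)
Schedule.create(ws, name="daily-run",
                pipeline_id=published.id,
                experiment_name="pipeline-demo",
                recurrence=recurrence)

# 2) Reactive schedule: poll the datastore and rerun on file changes
Schedule.create(ws, name="on-data-change",
                pipeline_id=published.id,
                experiment_name="pipeline-demo",
                datastore=ws.get_default_datastore(),
                polling_interval=5)              # minutes between polls
```

The reactive schedule is what enables the "rerun when the input data changes" behavior: the service polls the datastore path and submits a new pipeline run when modified files are detected.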

Finally, we also modularized and versioned a pipeline step, so it can be reused in other projects. We used InputPortDef and OutputPortDef to create virtual bindings for data sources...
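A reusable step of this kind can be sketched with the Module API, where InputPortDef and OutputPortDef declare the step's virtual data bindings. This is an assumed, illustrative sketch: the module name, port names, and script are placeholders, and the exact keyword arguments may differ across SDK v1 versions.

```python
# Hypothetical sketch: a versioned, reusable pipeline step (SDK v1).
from azureml.core import Workspace
from azureml.pipeline.core.graph import InputPortDef, OutputPortDef
from azureml.pipeline.core.module import Module

ws = Workspace.from_config()

# Register a named, versionable module in the workspace
module = Module.create(ws, name="prep-module",
                       description="Reusable data preparation step")

# Virtual bindings: consumers later map concrete datasets or
# PipelineData objects onto these named ports
in_port = InputPortDef(name="raw_input",
                       data_types=["AnyDirectory"])      # assumed data type
out_port = OutputPortDef(name="prepared_output")

# Publish a concrete, versioned implementation of the module
module.publish_python_script(
    script_name="prepare.py",                            # assumed script
    source_directory=".",
    description="Version 1 of the preparation step",
    inputs=[in_port],
    outputs=[out_port],
    version="1",
    is_default=True)
```

Other projects can then consume the module through a ModuleStep, supplying their own data sources for the declared ports, which is what makes the step reusable independently of any single pipeline.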