Mastering Azure Machine Learning - Second Edition

By: Christoph Körner, Marcel Alsdorf

Overview of this book

Azure Machine Learning is a cloud service for accelerating and managing the machine learning (ML) project life cycle that ML professionals, data scientists, and engineers can use in their day-to-day workflows. This book covers the end-to-end ML process using Microsoft Azure Machine Learning, including data preparation, performing and logging ML training runs, designing training and deployment pipelines, and managing these pipelines via MLOps. The first section shows you how to set up an Azure Machine Learning workspace, ingest and version datasets, and preprocess, label, and enrich these datasets for training. In the next two sections, you'll discover how to enrich and train ML models for embedding, classification, and regression. You'll explore advanced NLP techniques, traditional ML models such as boosted trees, modern deep neural networks, recommendation systems, reinforcement learning, and complex distributed ML training techniques - all using Azure Machine Learning. The last section will teach you how to deploy the trained models as a batch pipeline or real-time scoring service using Docker, Azure Machine Learning clusters, Azure Kubernetes Service, and alternative deployment targets. By the end of this book, you'll be able to combine all the steps you've learned by building an MLOps pipeline.
Table of Contents (23 chapters)

Section 1: Introduction to Azure Machine Learning
Section 2: Data Ingestion, Preparation, Feature Engineering, and Pipelining
Section 3: The Training and Optimization of Machine Learning Models
Section 4: Machine Learning Model Deployment and Operations

Hardware optimization with FPGAs

In the previous section, we exported a model to ONNX to take advantage of an inference-optimized, hardware-accelerated runtime and improve scoring performance. In this section, we will take this approach one step further and deploy to even faster inference hardware: FPGAs.
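As a quick recap of that approach, the following is a minimal sketch of scoring an exported model with ONNX Runtime in Python; the file name model.onnx and the input shape are assumptions made for illustration, not values from the book:

import numpy as np
import onnxruntime as ort

# Load the exported ONNX model (the file name is an assumed example).
session = ort.InferenceSession("model.onnx")

# Look up the model's input name and build a dummy batch of the expected shape (assumed here).
input_name = session.get_inputs()[0].name
dummy_batch = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Run inference on the default execution provider (CPU unless others are configured).
outputs = session.run(None, {input_name: dummy_batch})
print(outputs[0].shape)

With an FPGA deployment, the same scoring call is instead served by a hardware-accelerated endpoint, which is what the rest of this section works towards.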

But, before we talk about how to deploy a model to an FPGA, let's first understand what an FPGA is and why we would choose one as a target for DL inference instead of a GPU.

Understanding FPGAs

Most of the integrated circuits (ICs) people come across belong to a specific variety called application-specific integrated circuits (ASICs). ASICs are purpose-built ICs, such as the processor in your laptop, the GPU cores on your graphics card, or the microcontroller in your washing machine. What these chips have in common is a fixed hardware footprint optimized to support a specific task. Often, like any general processor, they operate with a specific instruction...