Azure Data Scientist Associate Certification Guide

By: Andreas Botsikas, Michael Hlobil

Overview of this book

The Azure Data Scientist Associate Certification Guide helps you acquire practical knowledge for machine learning experimentation on Azure. It covers everything you need to pass the DP-100 exam and become a certified Azure Data Scientist Associate. Starting with an introduction to data science, you'll learn the terminology that will be used throughout the book and then move on to the Azure Machine Learning (Azure ML) workspace. You'll discover the studio interface and manage various components, such as data stores and compute clusters. Next, the book focuses on no-code and low-code experimentation, and shows you how to use the Automated ML wizard to locate and deploy optimal models for your dataset. You'll also learn how to run end-to-end data science experiments using the designer provided in Azure ML Studio. You'll then explore the Azure ML Software Development Kit (SDK) for Python and advance to creating experiments and publishing models using code. The book also guides you in optimizing your model's hyperparameters using Hyperdrive before demonstrating how to use responsible AI tools to interpret and debug your models. Once you have a trained model, you'll learn to operationalize it for batch or real-time inferences and monitor it in production. By the end of this Azure certification study guide, you'll have gained the knowledge and the practical skills required to pass the DP-100 exam.
Table of Contents (17 chapters)

Section 1: Starting your cloud-based data science journey
Section 2: No code data science experimentation
Section 3: Advanced data science tooling and capabilities

Chapter 12: Operationalizing Models with Code

In this chapter, you are going to learn how to operationalize the machine learning models you have trained so far in this book. You will explore two approaches: exposing a real-time endpoint by hosting a REST API that you can use to make inferences, and expanding your pipeline authoring knowledge to efficiently make inferences over big data in parallel. You will begin by registering a model in the workspace to keep track of the artifact. Then, you will publish a REST API that allows your model to integrate with third-party applications such as Power BI. Following this, you will author a pipeline to process half a million records within a couple of minutes in a very cost-effective manner.
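
As a preview of the first step, the following is a minimal sketch of registering a trained model with the Azure ML SDK for Python (azureml-core, v1). The model name, local path, and description shown here are illustrative assumptions rather than values from the book's exercises, and the sketch assumes a workspace config.json file is available in the working directory.

    # Minimal sketch: register a trained model artifact in the Azure ML workspace.
    # Assumes a config.json describing the workspace exists locally and that an
    # earlier training run produced a local model file.
    from azureml.core import Workspace, Model

    ws = Workspace.from_config()

    model = Model.register(
        workspace=ws,
        model_name="chapter12-demo-model",  # hypothetical name for illustration
        model_path="outputs/model.pkl",     # hypothetical local path to the artifact
        description="Model registered before deploying real-time and batch endpoints",
    )
    print(model.name, model.version)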

In this chapter, we are going to cover the following topics:

  • Understanding the various deployment options
  • Registering models in the workspace
  • Deploying real-time endpoints
  • Creating a batch inference...