Azure Data Scientist Associate Certification Guide

By: Andreas Botsikas, Michael Hlobil

Overview of this book

The Azure Data Scientist Associate Certification Guide helps you acquire practical knowledge for machine learning experimentation on Azure. It covers everything you need to pass the DP-100 exam and become a certified Azure Data Scientist Associate. Starting with an introduction to data science, you'll learn the terminology that will be used throughout the book and then move on to the Azure Machine Learning (Azure ML) workspace. You'll discover the studio interface and manage various components, such as data stores and compute clusters. Next, the book focuses on no-code and low-code experimentation, and shows you how to use the Automated ML wizard to locate and deploy optimal models for your dataset. You'll also learn how to run end-to-end data science experiments using the designer provided in Azure ML Studio. You'll then explore the Azure ML Software Development Kit (SDK) for Python and advance to creating experiments and publishing models using code. The book also guides you in optimizing your model's hyperparameters using Hyperdrive before demonstrating how to use responsible AI tools to interpret and debug your models. Once you have a trained model, you'll learn to operationalize it for batch or real-time inferences and monitor it in production. By the end of this Azure certification study guide, you'll have gained the knowledge and the practical skills required to pass the DP-100 exam.
Table of Contents (17 chapters)

Section 1: Starting your cloud-based data science journey
Section 2: No code data science experimentation
Section 3: Advanced data science tooling and capabilities

Authoring a pipeline

Let's assume that you need to create a repeatable workflow that has two steps:

  1. It loads the data from a registered dataset and splits it into training and test datasets. These datasets are converted into the special dataset construct required by the LightGBM tree-based algorithm, and the converted constructs are stored for use by the next step. In this example, you will use the loans dataset that you registered in Chapter 10, Understanding Model Results. You will be writing the code for this step within a folder named step01.
  2. It loads the pre-processed data and trains a LightGBM model, which is then stored in the /models/loans/ folder of the default datastore attached to the AzureML workspace. You will be writing the code for this step within a folder named step02.
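As a rough illustration of the two-step layout described above, the following sketch mimics the pattern using only the Python standard library: the first function splits a CSV file into train and test files, and the second loads the pre-processed data and persists a model artifact. All file names, the split logic, and the placeholder "training" step are illustrative assumptions; the real step01 would emit LightGBM dataset binaries, and the real step02 would call lightgbm.train and write to the default datastore rather than a local folder.

```python
import argparse
import csv
import pickle
import random
from pathlib import Path


def step01_split(input_csv: Path, output_dir: Path,
                 test_ratio: float = 0.2, seed: int = 42) -> None:
    """Split a CSV into train/test files.

    Stand-in for the real step01, which would also convert the splits
    into LightGBM dataset binaries before storing them.
    """
    with open(input_csv, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        rows = list(reader)
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * (1 - test_ratio))
    output_dir.mkdir(parents=True, exist_ok=True)
    for name, subset in (("train.csv", rows[:cut]), ("test.csv", rows[cut:])):
        with open(output_dir / name, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(header)
            writer.writerows(subset)


def step02_train(data_dir: Path, model_dir: Path) -> Path:
    """Load the pre-processed data and store a model artifact.

    Placeholder: the real step02 would call lightgbm.train here and
    write the booster to /models/loans/ on the default datastore.
    """
    with open(data_dir / "train.csv", newline="") as f:
        rows = list(csv.reader(f))
    model = {"n_training_rows": len(rows) - 1}  # trivial stand-in for a booster
    model_dir.mkdir(parents=True, exist_ok=True)
    model_path = model_dir / "model.pkl"
    with open(model_path, "wb") as f:
        pickle.dump(model, f)
    return model_path


def build_parser() -> argparse.ArgumentParser:
    """Argument parsing each step script would use to locate its inputs/outputs."""
    parser = argparse.ArgumentParser(description="Pipeline step sketch")
    parser.add_argument("--input-csv", type=Path, required=True)
    parser.add_argument("--output-dir", type=Path, default=Path("outputs"))
    return parser
```

In the actual pipeline, each of these functions would live in its own script (step01 and step02 folders), with build_parser-style argument handling so the pipeline can wire one step's output location into the next step's input.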

Each step will be a separate Python file that takes arguments specifying where to read its inputs from and where to write its outputs. These scripts will use the same mechanics as the scripts you authored...