
Mastering Azure Machine Learning - Second Edition

By: Christoph Körner, Marcel Alsdorf

Overview of this book

Azure Machine Learning is a cloud service for accelerating and managing the machine learning (ML) project life cycle that ML professionals, data scientists, and engineers can use in their day-to-day workflows. This book covers the end-to-end ML process using Microsoft Azure Machine Learning, including data preparation, performing and logging ML training runs, designing training and deployment pipelines, and managing these pipelines via MLOps.

The first section shows you how to set up an Azure Machine Learning workspace; ingest and version datasets; and preprocess, label, and enrich these datasets for training. In the next two sections, you'll discover how to enrich and train ML models for embedding, classification, and regression. You'll explore advanced NLP techniques, traditional ML models such as boosted trees, modern deep neural networks, recommendation systems, reinforcement learning, and complex distributed ML training techniques - all using Azure Machine Learning.

The last section will teach you how to deploy the trained models as a batch pipeline or real-time scoring service using Docker, Azure Machine Learning clusters, Azure Kubernetes Services, and alternative deployment targets. By the end of this book, you'll be able to combine all the steps you've learned by building an MLOps pipeline.
Table of Contents (23 chapters)

Section 1: Introduction to Azure Machine Learning
Section 2: Data Ingestion, Preparation, Feature Engineering, and Pipelining
Section 3: The Training and Optimization of Machine Learning Models
Section 4: Machine Learning Model Deployment and Operations

Summary

In this chapter, you learned how to preprocess textual data, as well as nominal and ordinal categorical data, using state-of-the-art NLP techniques.

You can now build a classical NLP pipeline that removes stop words, applies lemmatization and stemming, extracts n-grams, and counts term occurrences using a bag-of-words model. We used SVD to reduce the dimensionality of the resulting feature vectors and to generate a lower-dimensional topic encoding. One important refinement of the count-based bag-of-words model is to weight terms by their relative frequencies rather than their raw counts. You learned about the TF-IDF function and can use it to compute the importance of a word in a document relative to the rest of the corpus.
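As a minimal sketch of this pipeline, the following example uses scikit-learn's TfidfVectorizer (stop word removal, n-grams, and TF-IDF weighting in one step) followed by TruncatedSVD for the topic encoding. The toy corpus, the bigram range, and the choice of two components are illustrative assumptions, not values from the chapter:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

# Toy corpus for illustration only.
corpus = [
    "the cat sat on the mat",
    "the dog chased the cat",
    "machine learning on azure is fun",
    "azure machine learning accelerates training",
]

# Bag-of-words with TF-IDF weighting: English stop words are removed,
# and unigrams plus bigrams are counted as terms.
vectorizer = TfidfVectorizer(stop_words="english", ngram_range=(1, 2))
X = vectorizer.fit_transform(corpus)   # sparse matrix: (n_docs, n_terms)

# Truncated SVD (latent semantic analysis) projects the sparse TF-IDF
# vectors onto a dense, lower-dimensional topic encoding.
svd = TruncatedSVD(n_components=2, random_state=42)
topics = svd.fit_transform(X)          # dense matrix: (n_docs, 2)

print(X.shape, topics.shape)
```

Note that lemmatization and stemming are not built into TfidfVectorizer; in practice you would plug an NLTK or spaCy analyzer into the vectorizer's `tokenizer` parameter before this step.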

Next, we looked at Word2Vec and GloVe, which are pretrained dictionaries of numeric word embeddings. You can now easily reuse a pretrained word embedding in commercial NLP applications and benefit from significant accuracy improvements thanks to the semantic embedding of words.

Finally, we finished the chapter by...