Automated Machine Learning

By: Adnan Masood

Overview of this book

Every machine learning engineer deals with systems that have hyperparameters, and the most basic task in automated machine learning (AutoML) is to set these hyperparameters automatically to optimize performance. The latest deep neural networks have a wide range of hyperparameters for their architecture, regularization, and optimization, which can be tuned effectively to save time and effort. This book reviews the underlying techniques of automated feature engineering, model and hyperparameter tuning, gradient-based approaches, and much more. You'll discover different ways of implementing these techniques in open source tools and then learn to use enterprise tools for implementing AutoML in three major cloud service providers: Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform. As you progress, you'll explore the features of cloud AutoML platforms by building machine learning models using AutoML. The book will also show you how to develop accurate models by automating time-consuming and repetitive tasks in the machine learning development lifecycle. By the end of this machine learning book, you'll be able to build and deploy AutoML models that are not only accurate but also boost productivity, support interoperability, and minimize feature engineering tasks.
Table of Contents (15 chapters)

Section 1: Introduction to Automated Machine Learning
Section 2: AutoML with Cloud Platforms
Section 3: Applied Automated Machine Learning

Hyperparameter optimization

Due to its ubiquity and ease of framing, hyperparameter optimization is sometimes regarded as synonymous with automated ML. Hyperparameter optimization is also dubbed hyperparameter tuning or hyperparameter learning; when the search space is expanded to include feature engineering choices as well, the task is known as automated pipeline learning. All these terms can be a bit daunting for something as simple as finding the right parameters for a model, but graduate students must publish, and I digress.
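
To make the idea concrete, here is a minimal sketch of hyperparameter optimization using scikit-learn's RandomizedSearchCV; the dataset, model, and search ranges are illustrative assumptions, not an example from the book:

# A minimal hyperparameter search sketch using scikit-learn's
# RandomizedSearchCV (dataset, model, and ranges are illustrative).
from scipy.stats import randint
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_digits(return_X_y=True)

# The search space: each entry maps a hyperparameter to a
# distribution (or list) of candidate values to sample from.
param_distributions = {
    "n_estimators": randint(50, 500),
    "max_depth": randint(2, 20),
    "min_samples_split": randint(2, 11),
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions=param_distributions,
    n_iter=20,   # number of sampled configurations
    cv=5,        # 5-fold cross-validation per configuration
    random_state=0,
)
search.fit(X, y)

print("Best CV accuracy:", search.best_score_)
print("Best hyperparameters:", search.best_params_)

Random search samples a fixed budget of configurations from the space, scores each one by cross-validation, and keeps the best-scoring configuration; this simple loop is the core that more sophisticated AutoML strategies build on.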

There are a couple of key points regarding hyperparameters that are important to note as we look further into these constructs. It is well established that default parameters are rarely optimal. Olson et al., in their NIH paper, demonstrated that relying on default parameters is almost always a bad idea. Olson notes that "Tuning often improves an algorithm's accuracy by 3–5%, depending on the algorithm…. In some cases, parameter tuning led to CV accuracy improvements...
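
As a rough, hand-rolled illustration of that point (the dataset and parameter grid below are assumptions chosen for demonstration, not taken from Olson's study), you can compare a model's cross-validated accuracy under library defaults against a small tuned grid:

# Sketch comparing default hyperparameters against a small tuned
# grid (dataset and grid are illustrative, not from Olson et al.).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Baseline: the library's default hyperparameters.
default_acc = cross_val_score(
    GradientBoostingClassifier(random_state=0), X, y, cv=5
).mean()

# Tuned: exhaustive search over a small illustrative grid.
grid = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_grid={
        "learning_rate": [0.01, 0.1, 0.3],
        "max_depth": [2, 3, 5],
        "n_estimators": [100, 300],
    },
    cv=5,
)
grid.fit(X, y)

print(f"Default CV accuracy: {default_acc:.3f}")
print(f"Tuned CV accuracy:   {grid.best_score_:.3f}")

The exact gain will vary by algorithm and dataset, which is precisely Olson's observation: tuning is cheap insurance against defaults that were never chosen with your data in mind.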