The Data Science Workshop - Second Edition

By: Anthony So, Thomas V. Joseph, Robert Thas John, Andrew Worsley, Dr. Samuel Asare
Overview of this book

Where there’s data, there’s insight. With so much data being generated, there is immense scope to extract meaningful information that’ll boost business productivity and profitability. By learning to convert raw data into game-changing insights, you’ll open new career paths and opportunities. The Data Science Workshop begins by introducing different types of projects and showing you how to incorporate machine learning algorithms in them. You’ll learn to select a relevant metric and even assess the performance of your model. To tune the hyperparameters of an algorithm and improve its accuracy, you’ll get hands-on with approaches such as grid search and random search. Next, you’ll learn dimensionality reduction techniques to easily handle many variables at once, before exploring how to use model ensembling techniques and create new features to enhance model performance. In a bid to help you automatically create new features that improve your model, the book demonstrates how to use the automated feature engineering tool. You’ll also understand how to use the orchestration and scheduling workflow to deploy machine learning models in batch. By the end of this book, you’ll have the skills to start working on data science projects confidently.

What Are Hyperparameters?

Hyperparameters can be thought of as a set of dials and switches for each estimator: adjusting them changes how the estimator models relationships in the data.

Have a look at Figure 8.1:

Figure 8.1: How hyperparameters work

If you read the preceding figure from left to right, you can see that during the tuning process we change the value of a hyperparameter, which results in a change to the estimator, which in turn causes a change in model performance. Our objective is to find the hyperparameterization that leads to the best model performance: the optimal hyperparameterization.
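
As a concrete illustration of this loop, the following sketch (using an assumed toy dataset and illustrative candidate values, not an example from the book) varies a single hyperparameter of a decision tree and compares the cross-validated performance of each setting:

```python
# A minimal sketch of the tuning loop: change a hyperparameter value,
# refit the estimator, and observe the change in model performance.
# The dataset and the candidate max_depth values are assumptions
# chosen purely for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

for max_depth in [1, 3, 5, None]:  # candidate hyperparameter values
    model = DecisionTreeClassifier(max_depth=max_depth, random_state=42)
    # Mean accuracy over 5 cross-validation folds for this setting
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"max_depth={max_depth}: mean CV accuracy = {score:.3f}")
```

Whichever candidate value yields the best mean score would be the optimal hyperparameterization among the settings tried.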

Estimators vary in the number and type of hyperparameters they expose, which means that you can sometimes face a very large number of possible hyperparameterizations to choose from for a single estimator.

For instance, scikit-learn's implementation of the SVM classifier (sklearn.svm.SVC), which you will be introduced to later in the chapter...
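
One quick way to see how many of these dials a single estimator exposes is to list its hyperparameters and their default values; a minimal sketch (assuming scikit-learn is installed) is shown below:

```python
# Inspect the hyperparameters exposed by scikit-learn's SVC.
# get_params() returns a dict mapping each hyperparameter name
# to its current (here, default) value, e.g. C, kernel, gamma, degree.
from sklearn.svm import SVC

for name, value in SVC().get_params().items():
    print(f"{name} = {value}")
```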