
The bias/variance trade-off


In this section, we're going to continue our discussion of error due to bias, and introduce a new source of error called variance. We will begin by clarifying what we mean by error terms, and then dissect the various sources of modeling error.

Error terms

One of the central topics of model building is reducing error. However, there are several types of error, two of which we have some control over: bias and variance. There is a trade-off in a model's ability to minimize both bias and variance simultaneously, and this is known as the bias-variance trade-off or the bias-variance dilemma.

Some models manage to control both to an extent. However, this dilemma will almost always be present in your modeling considerations.
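
To see the trade-off in action, here is a minimal sketch (not code from the book) using scikit-learn: we fit polynomials of degree 1, 4, and 15 to noisy samples of a sine curve and compare the training error with the error on held-out data. The degrees, sample size, and noise level are arbitrary choices made for illustration:

    # Compare under-, well-, and over-parameterized fits on the same data
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_squared_error
    from sklearn.model_selection import train_test_split

    rng = np.random.RandomState(42)
    X = rng.uniform(0.0, 1.0, size=(150, 1))
    y = np.sin(2 * np.pi * X).ravel() + rng.normal(scale=0.25, size=150)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.4, random_state=42)

    for degree in (1, 4, 15):
        model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
        model.fit(X_train, y_train)
        train_mse = mean_squared_error(y_train, model.predict(X_train))
        test_mse = mean_squared_error(y_test, model.predict(X_test))
        print(f"degree={degree:>2}  train MSE={train_mse:.3f}  "
              f"test MSE={test_mse:.3f}")

Typically, the degree-1 model scores poorly on both sets (high bias), the degree-15 model scores well on the training set but worse on the held-out set (high variance), and the middle degree strikes the best balance.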

Error due to bias

High bias can also be called underfitting or over-generalization. High bias generally leads to an inflexible model that misses the true relationship between the features and the target...
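
To make this concrete, here is a short, hedged illustration (again, not the book's own example): a straight line fit to clearly quadratic data scores poorly even on the data it was trained on, because the model family cannot represent the true relationship. The data-generating function and noise scale are assumptions made for the demonstration:

    # A high-bias model fails even on its own training data
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.RandomState(0)
    X = rng.uniform(-3.0, 3.0, size=(200, 1))
    y = X.ravel() ** 2 + rng.normal(scale=0.5, size=200)

    linear = LinearRegression().fit(X, y)
    # An R^2 near zero on the *training* data is a sign of underfitting:
    # more data will not help until the model is made more flexible
    print(f"training R^2: {linear.score(X, y):.3f}")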