
Supervised Machine Learning with Python

By: Taylor Smith

Overview of this book

Supervised machine learning is used in a wide range of sectors, such as finance, online advertising, and analytics, to train systems that make pricing predictions, campaign adjustments, customer recommendations, and much more by learning from the data they are trained on and making decisions on their own. This makes it crucial to know how a machine 'learns' under the hood. This book will guide you through the implementation and nuances of many popular supervised machine learning algorithms, and help you understand how they work. You’ll embark on this journey with a quick overview of supervised learning and see how it differs from unsupervised learning. You’ll then explore parametric models, such as linear and logistic regression, non-parametric methods, such as decision trees, and a variety of clustering techniques that facilitate decision-making and predictions. As you advance, you'll work hands-on with recommender systems, which are widely used by online companies to increase user interaction and enrich shopping potential. Finally, you’ll wrap up with a brief foray into neural networks and transfer learning. By the end of this book, you’ll be equipped with hands-on techniques and will have gained the practical know-how you need to quickly and effectively apply algorithms to solve new problems.

Decision trees


In the previous section, we computed the information gain for a given split. Recall that it is calculated by comparing the Gini impurity of the parent node with the weighted Gini impurity of each child node. A higher information gain is better: it means our split has successfully reduced the impurity of the child nodes. However, before a candidate split can be evaluated, we need to know how it is produced.
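
To make this concrete, the following is a minimal sketch, in plain NumPy, of how the Gini impurity and the resulting information gain might be computed. The names gini_impurity and information_gain are illustrative helpers, not the book's own implementation:

    import numpy as np

    def gini_impurity(labels):
        # Gini impurity: 1 minus the sum of squared class proportions.
        _, counts = np.unique(labels, return_counts=True)
        proportions = counts / counts.sum()
        return 1.0 - np.sum(proportions ** 2)

    def information_gain(parent, left, right):
        # Gain = parent impurity minus the size-weighted impurity of
        # the two children produced by the split.
        n = len(parent)
        weighted = (len(left) / n) * gini_impurity(left) \
            + (len(right) / n) * gini_impurity(right)
        return gini_impurity(parent) - weighted

    # A perfect split of a balanced two-class node recovers the full
    # parent impurity of 0.5:
    parent = np.array([0, 0, 0, 1, 1, 1])
    print(information_gain(parent, parent[:3], parent[3:]))  # 0.5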

For each split, beginning at the root, the algorithm scans all the features in the data and samples a number of candidate values from each. There are various strategies for selecting these values; for the general use case, we will describe a random-k approach (sketched in code after the following list):

  • For each of the sampled values in each feature, we simulate a candidate split
  • Values less than or equal to the sampled value go in one direction, say left, and values above it go in the other direction, that is, to the right
  • Now, for each candidate split, we compute the information gain and select the feature/value combination that yields the highest gain
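
Putting these steps together, here is a sketch of what such a random-k split search could look like, reusing the hypothetical information_gain helper from the earlier snippet; best_random_split and its parameters are illustrative, not the book's actual implementation:

    import numpy as np

    def best_random_split(X, y, k=10, random_state=None):
        # Scan every feature, sample up to k candidate threshold values
        # from it, and keep the (feature, value) pair with the highest
        # information gain.
        rng = np.random.default_rng(random_state)
        best = (None, None, -np.inf)  # (feature index, threshold, gain)
        for feature in range(X.shape[1]):
            values = np.unique(X[:, feature])
            if len(values) > k:
                values = rng.choice(values, size=k, replace=False)
            for value in values:
                goes_left = X[:, feature] <= value  # <= threshold goes left
                left, right = y[goes_left], y[~goes_left]
                if len(left) == 0 or len(right) == 0:
                    continue  # degenerate split, nothing to evaluate
                gain = information_gain(y, left, right)
                if gain > best[2]:
                    best = (feature, value, gain)
        return best

Sampling at most k thresholds per feature keeps the search cheap on continuous features, at the cost of possibly missing the single best cut point; that trade-off is what makes the random approach practical for the general use case.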