Python Machine Learning By Example - Third Edition

By: Yuxi (Hayden) Liu

Overview of this book

Python Machine Learning By Example, Third Edition serves as a comprehensive gateway into the world of machine learning (ML). With six new chapters on topics including movie recommendation engine development with Naïve Bayes, recognizing faces with support vector machines, predicting stock prices with artificial neural networks, categorizing images of clothing with convolutional neural networks, making predictions with sequences using recurrent neural networks, and leveraging reinforcement learning for making decisions, the book has been considerably updated for the latest enterprise requirements. At the same time, it provides actionable insights into the key fundamentals of ML with Python programming. Hayden applies his expertise to demonstrate implementations of algorithms in Python, both from scratch and with libraries. Each chapter walks through an industry-adopted application. With the help of realistic examples, you will gain an understanding of the mechanics of ML techniques in areas such as exploratory data analysis, feature engineering, classification, regression, clustering, and NLP. By the end of this book, you will have gained a broad picture of the ML ecosystem and will be well-versed in the best practices of applying ML techniques to solve problems.

Implementing a decision tree from scratch

We develop the CART algorithm by hand on the following toy dataset:

Figure 4.8: An example of ad data
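
The weighted_impurity function referenced below is defined earlier in the book and is not shown in this excerpt. As a minimal sketch, assuming it takes a list of label groups (one per child node) and returns the size-weighted Gini Impurity, it might look like this; the book's exact implementation may differ:

import numpy as np

def gini_impurity(labels):
    # Gini Impurity of a single group of class labels
    if len(labels) == 0:
        # An empty group contributes no impurity
        return 0
    # Fraction of samples belonging to each class
    fractions = np.unique(labels, return_counts=True)[1] / len(labels)
    return 1 - np.sum(fractions ** 2)

def weighted_impurity(groups):
    # Gini Impurity of each child group, weighted by group size
    total = sum(len(group) for group in groups)
    return sum(len(group) / total * gini_impurity(group)
               for group in groups)

With this definition, weighted_impurity([[1, 1, 0], [0, 0, 0, 1]]) evaluates to roughly 0.405 and weighted_impurity([[0, 0], [1, 0, 1, 0, 1]]) to roughly 0.343, matching the results that follow.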

To begin with, we decide on the first splitting point, the root, by trying out all possible values for each of the two features. We utilize the weighted_impurity function we just defined to calculate the weighted Gini Impurity for each possible combination, as follows:

Gini(interest, tech) = weighted_impurity([[1, 1, 0],
    [0, 0, 0, 1]]) = 0.405

Here, if we partition according to whether the user's interest is tech, we have the 1st, 5th, and 6th samples in one group and the remaining samples in another group. The classes for the first group are then [1, 1, 0], and those for the second group are [0, 0, 0, 1].
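
Working through the arithmetic: Gini([1, 1, 0]) = 1 - (2/3)² - (1/3)² ≈ 0.444 and Gini([0, 0, 0, 1]) = 1 - (3/4)² - (1/4)² = 0.375, so the weighted impurity is 3/7 × 0.444 + 4/7 × 0.375 ≈ 0.405. Next, we try partitioning on the fashion value of the interest feature: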

Gini(interest, fashion) = weighted_impurity([[0, 0],
    [1, 0, 1, 0, 1]]) = 0.343

Here, if we partition according to whether the user's interest is fashion, we have the 2nd and...
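
The arithmetic for the fashion split is analogous: Gini([0, 0]) = 0 and Gini([1, 0, 1, 0, 1]) = 1 - (3/5)² - (2/5)² = 0.48, giving a weighted impurity of 2/7 × 0 + 5/7 × 0.48 ≈ 0.343. Since CART selects the split with the lowest weighted impurity, fashion is the better of these two candidate splits so far.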