Data Science Algorithms in a Week - Second Edition

By: David Natingga

Overview of this book

Machine learning applications are highly automated and self-modifying, and they continue to improve over time with minimal human intervention as they learn from the training data. To address the complex nature of various real-world data problems, specialized machine learning algorithms have been developed. Through algorithmic and statistical analysis, these models can also be leveraged to gain new knowledge from existing data. Data Science Algorithms in a Week addresses the problems of accurate and efficient data classification and prediction. Over the course of seven days, you will be introduced to seven algorithms, along with exercises that will help you understand different aspects of machine learning. You will see how to pre-cluster your data to optimize and classify it for large datasets. This book also guides you in predicting data based on existing trends in your dataset. It covers algorithms such as k-nearest neighbors, Naive Bayes, decision trees, random forests, k-means, regression, and time-series analysis. By the end of this book, you will understand how to choose machine learning algorithms for clustering, classification, and regression, and know which is best suited for your problem.

Playing chess – analysis with a decision tree


Let's take an example from Chapter 2, Naive Bayes, again:

Temperature   Wind     Sunshine   Play
Cold          Strong   Cloudy     No
Warm          Strong   Cloudy     No
Warm          None     Sunny      Yes
Hot           None     Sunny      No
Hot           Breeze   Cloudy     Yes
Warm          Breeze   Sunny      Yes
Cold          Breeze   Cloudy     No
Cold          None     Sunny      Yes
Hot           Strong   Cloudy     Yes
Warm          None     Cloudy     Yes
Warm          Strong   Sunny      ?

We would like to find out whether our friend would like to play chess with us in the park. But this time, we would like to use decision trees to find the answer.

Analysis

We have the initial set, S, of the data samples, as follows:

S = {(Cold,Strong,Cloudy,No), (Warm,Strong,Cloudy,No), (Warm,None,Sunny,Yes), (Hot,None,Sunny,No), (Hot,Breeze,Cloudy,Yes), (Warm,Breeze,Sunny,Yes), (Cold,Breeze,Cloudy,No), (Cold,None,Sunny,Yes), (Hot,Strong,Cloudy,Yes), (Warm,None,Cloudy,Yes)}

First, we determine the information gain for each of the three non-classifying attributes: temperature, wind, and sunshine. The possible values for temperature are Cold, Warm, and Hot. Therefore, we will partition the set...
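The computation described above can be sketched in Python. The following is a minimal illustration (the variable names, helper functions, and attribute-to-index mapping are my own choices, not the book's code): it defines the sample set S, computes the Shannon entropy of the class labels, and then the information gain from partitioning S by each non-classifying attribute.

```python
import math
from collections import Counter

# The ten data samples from the chess example:
# (Temperature, Wind, Sunshine, Play).
S = [
    ("Cold", "Strong", "Cloudy", "No"),
    ("Warm", "Strong", "Cloudy", "No"),
    ("Warm", "None", "Sunny", "Yes"),
    ("Hot", "None", "Sunny", "No"),
    ("Hot", "Breeze", "Cloudy", "Yes"),
    ("Warm", "Breeze", "Sunny", "Yes"),
    ("Cold", "Breeze", "Cloudy", "No"),
    ("Cold", "None", "Sunny", "Yes"),
    ("Hot", "Strong", "Cloudy", "Yes"),
    ("Warm", "None", "Cloudy", "Yes"),
]

# Position of each non-classifying attribute inside a sample tuple.
ATTRS = {"Temperature": 0, "Wind": 1, "Sunshine": 2}

def entropy(samples):
    """Shannon entropy (in bits) of the Play labels in `samples`."""
    counts = Counter(s[-1] for s in samples)
    total = len(samples)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def information_gain(samples, attr):
    """Entropy reduction from partitioning `samples` by `attr`."""
    idx = ATTRS[attr]
    total = len(samples)
    partitions = {}
    for s in samples:
        partitions.setdefault(s[idx], []).append(s)
    # Weighted average entropy of the partitions.
    remainder = sum(len(p) / total * entropy(p)
                    for p in partitions.values())
    return entropy(samples) - remainder

for attr in ATTRS:
    print(f"IG({attr}) = {information_gain(S, attr):.4f}")
```

With six Yes and four No labels, the entropy of S is about 0.9710 bits; running the sketch gives information gains of roughly 0.0955 for Temperature, 0.0955 for Wind, and 0.0464 for Sunshine, so partitioning by Sunshine is the least informative first split for this data.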