Playing chess example

We will again use the example from Chapter 2, Naive Bayes, and Chapter 3, Decision Trees, as follows:

Temperature | Wind   | Sunshine | Play
----------- | ------ | -------- | ----
Cold        | Strong | Cloudy   | No
Warm        | Strong | Cloudy   | No
Warm        | None   | Sunny    | Yes
Hot         | None   | Sunny    | No
Hot         | Breeze | Cloudy   | Yes
Warm        | Breeze | Sunny    | Yes
Cold        | Breeze | Cloudy   | No
Cold        | None   | Sunny    | Yes
Hot         | Strong | Cloudy   | Yes
Warm        | None   | Cloudy   | Yes
Warm        | Strong | Sunny    | ?

This time, however, we would like to use a random forest consisting of four random decision trees to classify the final row.
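Before working through the analysis by hand, a quick sanity check can be run with scikit-learn; this is a minimal sketch rather than the book's own implementation, and the ordinal encoding and random_state are illustrative choices:

from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import OrdinalEncoder

data = [
    ['Cold', 'Strong', 'Cloudy', 'No'], ['Warm', 'Strong', 'Cloudy', 'No'],
    ['Warm', 'None', 'Sunny', 'Yes'], ['Hot', 'None', 'Sunny', 'No'],
    ['Hot', 'Breeze', 'Cloudy', 'Yes'], ['Warm', 'Breeze', 'Sunny', 'Yes'],
    ['Cold', 'Breeze', 'Cloudy', 'No'], ['Cold', 'None', 'Sunny', 'Yes'],
    ['Hot', 'Strong', 'Cloudy', 'Yes'], ['Warm', 'None', 'Cloudy', 'Yes'],
]
X = [row[:3] for row in data]   # Temperature, Wind, Sunshine
y = [row[3] for row in data]    # Play

# Encode the categorical attributes as ordinals so the trees can split on them.
encoder = OrdinalEncoder()
X_enc = encoder.fit_transform(X)

# Four trees, mirroring the four random decision trees in the text.
forest = RandomForestClassifier(n_estimators=4, random_state=0)
forest.fit(X_enc, y)

# Classify the unknown row (Warm, Strong, Sunny) by the majority vote of the trees.
query = encoder.transform([['Warm', 'Strong', 'Sunny']])
print(forest.predict(query))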

Analysis

We are given M = 4 variables from which a feature can be classified. Thus, we choose the maximum number of variables considered at a node; a common heuristic is the square root of the variable count, m = ceil(sqrt(M)) = ceil(sqrt(4)) = 2.
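In code, with the square-root heuristic assumed above:

import math

M = 4                        # number of variables in the data
m = math.ceil(math.sqrt(M))  # square-root heuristic; m = 2 here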

We are given the following features:

[['Cold', 'Strong', 'Cloudy', 'No'], ['Warm', 'Strong', 'Cloudy', 'No'],
 ['Warm', 'None', 'Sunny', 'Yes'], ['Hot', 'None', 'Sunny', 'No'],
 ['Hot', 'Breeze', 'Cloudy', 'Yes'], ['Warm', 'Breeze', 'Sunny', 'Yes'],
 ['Cold', 'Breeze', 'Cloudy', 'No'], ['Cold', 'None', 'Sunny', 'Yes'],
 ['Hot', 'Strong', 'Cloudy', 'Yes'], ['Warm', 'None', 'Cloudy', 'Yes']]
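Each of the four random decision trees is then grown from its own bootstrap sample of these features, with only m randomly chosen variables competing at each split. The following is a rough pure-Python sketch of that sampling step; the helper names bootstrap_sample and random_variable_subset are hypothetical, not the book's:

import random

random.seed(0)  # illustrative seed for reproducibility

features = [
    ['Cold', 'Strong', 'Cloudy', 'No'], ['Warm', 'Strong', 'Cloudy', 'No'],
    ['Warm', 'None', 'Sunny', 'Yes'], ['Hot', 'None', 'Sunny', 'No'],
    ['Hot', 'Breeze', 'Cloudy', 'Yes'], ['Warm', 'Breeze', 'Sunny', 'Yes'],
    ['Cold', 'Breeze', 'Cloudy', 'No'], ['Cold', 'None', 'Sunny', 'Yes'],
    ['Hot', 'Strong', 'Cloudy', 'Yes'], ['Warm', 'None', 'Cloudy', 'Yes'],
]

def bootstrap_sample(data):
    # Draw len(data) rows uniformly at random WITH replacement:
    # each tree in the forest trains on its own such sample.
    return [random.choice(data) for _ in data]

def random_variable_subset(num_attributes, m):
    # At a node, only m randomly chosen attribute columns
    # (0 = Temperature, 1 = Wind, 2 = Sunshine) compete for the split.
    return random.sample(range(num_attributes), m)

# One bootstrap sample per tree in the four-tree forest.
samples = [bootstrap_sample(features) for _ in range(4)]
print(len(samples), len(samples[0]))   # 4 samples of 10 rows each
print(random_variable_subset(3, 2))    # e.g. [1, 0]

Because each sample is drawn with replacement, some rows appear several times and others not at all in a given tree's training data, which is what makes the four trees differ from one another.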