Data Science Algorithms in a Week - Second Edition

By: David Natingga

Overview of this book

Machine learning applications are highly automated and self-modifying, and they continue to improve over time with minimal human intervention as they learn from training data. To address the complex nature of various real-world data problems, specialized machine learning algorithms have been developed. Through algorithmic and statistical analysis, these models can also be leveraged to gain new knowledge from existing data. Data Science Algorithms in a Week addresses problems related to accurate and efficient data classification and prediction. Over the course of seven days, you will be introduced to seven algorithms, along with exercises that will help you understand different aspects of machine learning. You will see how to pre-cluster your data to optimize and classify it for large datasets. This book also guides you in predicting data based on existing trends in your dataset. It covers algorithms such as k-nearest neighbors, Naive Bayes, decision trees, random forests, k-means clustering, regression, and time-series analysis. By the end of this book, you will understand how to choose machine learning algorithms for clustering, classification, and regression, and will know which are best suited to your problem.
Glossary of Algorithms and Methods in Data Science

ID3 algorithm – decision tree construction

The ID3 algorithm constructs a decision tree from data based on information gain. We start with a set S. The data items in S have various attributes, according to which we can partition S. If an attribute A has the values {v1, ..., vn}, then we partition S into the sets S1, ..., Sn, where Si is the subset of S whose elements have the value vi for the attribute A.
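The partitioning step above can be sketched in Python. This is a minimal illustration, not code from the book: the data items are assumed to be dictionaries mapping attribute names to values, and the function name `partition` is chosen here for clarity.

```python
from collections import defaultdict

def partition(S, attribute):
    """Split the data items in S into subsets S_i, one subset per
    distinct value v_i of the given attribute."""
    subsets = defaultdict(list)
    for item in S:
        subsets[item[attribute]].append(item)
    return dict(subsets)

# Hypothetical example data: weather observations with a 'wind' attribute
S = [
    {'wind': 'strong', 'play': 'no'},
    {'wind': 'weak',   'play': 'yes'},
    {'wind': 'strong', 'play': 'no'},
]
parts = partition(S, 'wind')
# parts has one subset per value of 'wind': {'strong': [...], 'weak': [...]}
```

Each subset in `parts` corresponds to one of the sets Si described above.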

If each element in S has the attributes A1, ..., Am, then we can partition S according to any of these attributes. The ID3 algorithm partitions S according to the attribute that yields the highest information gain. Suppose that this attribute is A1. Then, for the set S, we obtain the partitions S1, ..., Sn, where A1 has the possible values {v1, ..., vn}.
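The attribute-selection step can be sketched as follows, using the standard definitions of entropy and information gain for a discrete target. This is a hedged sketch, not the book's code: the items are assumed to be dictionaries, and `target` names the class attribute (here the hypothetical attribute `'play'`).

```python
import math
from collections import Counter, defaultdict

def entropy(S, target):
    """Shannon entropy of the target attribute over the set S."""
    counts = Counter(item[target] for item in S)
    total = len(S)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(S, attribute, target):
    """Entropy of S minus the weighted entropy of the subsets S_i
    obtained by partitioning S on the given attribute."""
    subsets = defaultdict(list)
    for item in S:
        subsets[item[attribute]].append(item)
    remainder = sum(len(Si) / len(S) * entropy(Si, target)
                    for Si in subsets.values())
    return entropy(S, target) - remainder

def best_attribute(S, attributes, target):
    """The attribute ID3 would select: the one with the highest gain."""
    return max(attributes, key=lambda a: information_gain(S, a, target))

# Hypothetical example: 'wind' perfectly predicts 'play', 'sunny' does not,
# so ID3 would partition on 'wind'.
S = [
    {'wind': 'strong', 'sunny': 'yes', 'play': 'no'},
    {'wind': 'strong', 'sunny': 'no',  'play': 'no'},
    {'wind': 'weak',   'sunny': 'yes', 'play': 'yes'},
    {'wind': 'weak',   'sunny': 'no',  'play': 'yes'},
]
best = best_attribute(S, ['wind', 'sunny'], 'play')  # 'wind'
```

Here `'wind'` splits S into two pure subsets, so its information gain equals the full entropy of S, while `'sunny'` leaves the class distribution unchanged and gains nothing.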

Since we have not constructed a tree yet, we first place a root node. For every partition of S, we place a new branch...