
Data Science Using Python and R

By: Chantal D. Larose, Daniel T. Larose

Overview of this book

Data science is hot: Bloomberg named data scientist the ‘hottest job in America’. Python and R are the top two open-source data science tools, and with them you can produce hands-on solutions to real-world business problems using state-of-the-art techniques. Each chapter presents step-by-step instructions and walkthroughs for solving data science problems in Python and R. You’ll learn how to prepare data, perform exploratory data analysis, and get the data ready for modeling. As you progress, you’ll explore what decision trees are and how to use them. You’ll also learn about model evaluation, misclassification costs, naïve Bayes classification, and neural networks. Later chapters cover clustering, regression modeling, dimension reduction, and association rules mining, and the book also sheds light on exciting newer topics, such as random forests and general linear models. Throughout, the emphasis is on data-driven error costs to enhance profitability, avoiding common pitfalls that can cost a company millions of dollars. By the end of this book, you’ll have the knowledge and confidence to start providing solutions to data science problems using R and Python.
Table of Contents (20 chapters)

6.3 THE C5.0 ALGORITHM FOR BUILDING DECISION TREES

The C5.0 algorithm is J. Ross Quinlan's extension of his own C4.5 algorithm for generating decision trees.5 Unlike CART, the C5.0 algorithm is not restricted to binary splits. The C5.0 algorithm uses the concept of information gain, or entropy reduction, to select the optimal split. Suppose that we have a variable X whose k possible values have probabilities p_1, p_2, …, p_k. The smallest number of bits, on average per symbol, needed to transmit a stream of symbols representing the observed values of X is called the entropy of X, defined as

H(X) = -\sum_{j} p_j \log_2(p_j)
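As a minimal sketch of this definition, the entropy of an observed stream of symbols can be computed directly from the empirical probabilities of each value (the function name `entropy` and its interface are illustrative, not from the book):

```python
import math
from collections import Counter

def entropy(values):
    """Entropy H(X) in bits: -sum_j p_j * log2(p_j), where p_j is the
    empirical probability of each distinct value in the stream."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

For example, a stream split evenly between two values has entropy 1 bit per symbol (`entropy(['a', 'a', 'b', 'b'])` returns `1.0`), while a stream of a single repeated value has entropy 0, since it carries no information.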

C5.0 uses entropy as follows. Suppose that we have a candidate split S, which partitions the training data set T into several subsets, T1, T2, …, Tk. The mean information requirement can then be calculated as the weighted sum of the entropies for the individual subsets, as follows:

H_S(T) = \sum_{i=1}^{k} P_i H_S(T_i)

where P_i represents the proportion of...
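A minimal sketch of this weighted sum, assuming each subset T_i is given as a list of class labels and P_i is its share of the records in T (the function names here are illustrative, not from the book):

```python
import math
from collections import Counter

def entropy(labels):
    """Entropy in bits of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def split_information(subsets):
    """Mean information requirement H_S(T): the entropy of each subset
    T_i, weighted by its proportion P_i of the total records."""
    total = sum(len(s) for s in subsets)
    return sum((len(s) / total) * entropy(s) for s in subsets)
```

A candidate split that produces pure subsets drives this quantity to zero: splitting `['y', 'y', 'n', 'n']` into `['y', 'y']` and `['n', 'n']` gives H_S(T) = 0, so the information gain over the parent's entropy of 1 bit is the full 1 bit, which is exactly what C5.0 rewards when choosing among splits.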