Practical Data Science Cookbook - Second Edition

By: Prabhanjan Narayanachar Tattar, Bhushan Purushottam Joshi, Sean Patrick Murphy, Abhijit Dasgupta, Anthony Ojeda

Overview of this book

As increasing amounts of data are generated each year, the need to analyze them and create value from them is more important than ever. Companies that know what to do with their data, and how to do it well, will have a competitive advantage over companies that don't. Because of this, there will be increasing demand for people who possess both the analytical and technical abilities to extract valuable insights from data and create solutions that put those insights to use. Starting with the basics, this book covers how to set up your numerical programming environment, introduces the data science pipeline, and guides you through several data projects in a step-by-step format. By working sequentially through the steps in each chapter, you will quickly become familiar with the process and learn how to apply it to a variety of situations, with examples in the two most popular programming languages for data analysis: R and Python.

Decision tree for German data


We have fitted a logistic regression model to the German credit data. Now, we will create a decision tree for the same data.

Getting ready

The GC2 object and the partitioned data (GC_Train) are required here, along with the fitted logistic regression model.
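If GC_Train is not already in your workspace, one way to recreate it is a random split of GC2; note that the 70:30 ratio and the seed below are illustrative assumptions, not taken from the recipe:

```r
# Assumed 70:30 train/test split of the GC2 data frame; the actual
# partitioning used in the earlier recipe may differ
set.seed(123)
train_idx <- sample(seq_len(nrow(GC2)), size = floor(0.7 * nrow(GC2)))
GC_Train <- GC2[train_idx, ]
GC_Test  <- GC2[-train_idx, ]
```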

How to do it...

We will create the decision tree using the rpart package:

  1. Create the decision tree and plot it as follows:

library(rpart)
GC_CT <- rpart(good_bad ~ ., data = GC_Train)
# windows() opens a plotting device on Windows only;
# use X11() on Linux or quartz() on macOS
windows(height = 20, width = 20)
plot(GC_CT, uniform = TRUE); text(GC_CT)
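If the base plot is hard to read, the same splits can also be inspected as text; this is a quick alternative view, not part of the recipe:

```r
# Print the fitted splits as an indented text tree: each node shows the
# split rule, observation count, loss, and predicted class
print(GC_CT)
```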

  2. The decision tree plot is given as follows:

  3. The properties of the fitted tree need to be evaluated.
  4. Check the complexity parameter and important variables of the fitted tree:

table(GC_Train$good_bad, predict(GC_CT, type = "class"))
##
##        bad good
##   bad  107   74
##   good  32  386

GC_CT$cptable
##        CP nsplit rel error xerror    xstd
## 1 0.06262      0    1.0000 1.0000 0.06209
## 2 0.02486      3    0.8122 0.9503 0.06118
## 3 0.02210      6    0.7348 1.0055 0.06219...
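Step 4 also mentions the important variables; rpart stores these in the fitted object, and the cptable above can drive pruning. A minimal sketch of both follows; choosing the CP value with the lowest cross-validated error (xerror) is a common convention, not something prescribed by the recipe:

```r
# Variables ranked by their contribution to the splits
GC_CT$variable.importance

# Choose the CP value with the smallest cross-validated error (xerror)
# and prune the fitted tree back to that complexity
cp_opt <- GC_CT$cptable[which.min(GC_CT$cptable[, "xerror"]), "CP"]
GC_CT_pruned <- prune(GC_CT, cp = cp_opt)

# Re-check the training confusion matrix on the pruned tree
table(GC_Train$good_bad, predict(GC_CT_pruned, type = "class"))
```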