Data Science Using Python and R

By: Chantal D. Larose, Daniel T. Larose

Overview of this book

Data science is hot: Bloomberg named data scientist the 'hottest job in America'. Python and R are the top two open-source data science tools, and with them you can produce hands-on solutions to real-world business problems using state-of-the-art techniques. Each chapter in the book presents step-by-step instructions and walkthroughs for solving data science problems using Python and R. You'll learn how to prepare data, perform exploratory data analysis, and get data ready for modeling. As you progress, you'll explore what decision trees are and how to use them. You'll also learn about model evaluation, misclassification costs, naïve Bayes classification, and neural networks. The later chapters provide comprehensive coverage of clustering, regression modeling, dimension reduction, and association rules mining. The book also sheds light on exciting newer topics, such as random forests and general linear models. Throughout, the book emphasizes data-driven error costs to enhance profitability, avoiding common pitfalls that can cost a company millions of dollars. By the end of this book, you'll have the knowledge and confidence to start providing solutions to data science problems using R and Python.
Table of Contents (20 chapters)

12.8 VALIDATION OF THE PRINCIPAL COMPONENTS

As with any other data science method, the results of the PCA should be validated using the test data set. Figure 12.11 shows the proportions of variance explained by all five components, with percentages not much different from the training set results in Figure 12.8. The four rotated components for the test set, shown in Figure 12.12, are similar to those for the training set from Figure 12.10b.


Figure 12.11 Proportions of variance explained from R for the test data set.


Figure 12.12 Component weights from R for the test data set.
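This validation step can be sketched in Python as well. The snippet below is a minimal illustration on synthetic data (the book's data set is not reproduced here): the PCA rotation is fit on the training predictors only, then the same rotation is applied to the test set so that the two sets of variance proportions can be compared.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Synthetic stand-ins for the training and test predictors
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 5))
X_test = rng.normal(size=(100, 5))

# Standardize and fit PCA on the training set only
scaler = StandardScaler().fit(X_train)
pca = PCA(n_components=5).fit(scaler.transform(X_train))
pve_train = pca.explained_variance_ratio_

# Project the test set with the TRAINING loadings, then compute the
# proportion of variance each component explains in the test data
scores_test = pca.transform(scaler.transform(X_test))
pve_test = scores_test.var(axis=0) / scores_test.var(axis=0).sum()

print(np.round(pve_train, 3))
print(np.round(pve_test, 3))
```

If the test-set proportions are close to the training-set proportions, as in Figures 12.8 and 12.11, the component structure is considered validated.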

So, did PCA alleviate our multicollinearity problem? We can check by examining

  1. The correlations among the four components.
  2. The variance inflation factors (VIFs) for the regression of the response on the components.
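The first check can be sketched as follows; this is a minimal illustration on synthetic data, not the book's own data set. Because principal component scores from a single PCA fit are mutually orthogonal, their correlation matrix should be (numerically) the identity.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in for the standardized predictors
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))

# Retain four components, as in the text
scores = PCA(n_components=4).fit_transform(X)

# Correlation matrix of the component scores: 1s on the diagonal,
# off-diagonal entries zero up to floating-point error
corr = np.corrcoef(scores, rowvar=False)
print(np.round(corr, 6))
```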

The correlation matrix for the principal components is shown in Figure 12.13. All correlations are zero, meaning that the components are uncorrelated. Finally, we obtain the VIFs for the regression of Sales per...
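The second check, computing the VIFs, can be sketched with `statsmodels`; again this uses synthetic stand-in data rather than the Sales data. Since the component scores are uncorrelated, each VIF should come out at (or extremely close to) 1, the minimum possible value, confirming that the multicollinearity has been removed.

```python
import numpy as np
from sklearn.decomposition import PCA
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Synthetic stand-in for the standardized predictors
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 5))
scores = PCA(n_components=4).fit_transform(X)

# VIF for each component: 1 / (1 - R^2) from regressing that component
# on the remaining components
vifs = [variance_inflation_factor(scores, i) for i in range(scores.shape[1])]
print(np.round(vifs, 4))
```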