
K folds cross-validation


K folds cross-validation gives a much better estimate of our model's performance than a single train-test split. Here's how it works (a short code sketch follows the list):

  1. We will split our data into a number of equal-sized slices (usually 3, 5, or 10). Call this number k.

  2. For each "fold" of the cross-validation, we will treat k-1 of the sections as the training set, and the remaining section as our test set.

  3. For each of the remaining folds, a different arrangement of k-1 sections serves as our training set and a different section serves as our test set.

  4. We compute the chosen evaluation metric (for example, accuracy or mean squared error) for each fold of the cross-validation.

  5. We average our scores at the end.
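Here is a minimal sketch of the procedure above using scikit-learn's KFold. The synthetic data and the plain linear regression model are placeholders chosen for illustration, not the book's example:

    import numpy as np
    from sklearn.model_selection import KFold
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_squared_error

    # Placeholder data: 100 observations of a single noisy linear feature
    rng = np.random.RandomState(0)
    X = rng.rand(100, 1)
    y = 3 * X.ravel() + rng.randn(100) * 0.1

    k = 5                                    # step 1: choose the number of folds
    kf = KFold(n_splits=k, shuffle=True, random_state=0)

    scores = []
    for train_idx, test_idx in kf.split(X):  # steps 2-3: rotate the held-out fold
        model = LinearRegression()
        model.fit(X[train_idx], y[train_idx])       # train on the k-1 remaining folds
        preds = model.predict(X[test_idx])          # evaluate on the held-out fold
        scores.append(mean_squared_error(y[test_idx], preds))  # step 4: score this fold

    print(np.mean(scores))                   # step 5: average the per-fold scores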

Cross-validation effectively performs multiple train-test splits on the same dataset. We do this for a few reasons, but mainly because cross-validation gives the most honest estimate of our model's out-of-sample error.
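In scikit-learn, this whole loop can be condensed into a single call. The following sketch reuses the placeholder X, y, and model from the snippet above; cross_val_score returns one score per fold, which we then average:

    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    # scoring='neg_mean_squared_error' returns negated MSE, so flip the sign back
    mse_scores = -cross_val_score(LinearRegression(), X, y, cv=5,
                                  scoring='neg_mean_squared_error')
    print(mse_scores.mean())    # averaged estimate of out-of-sample error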

To explain this visually, let's look at our mammal brain and body weight example for a second. The following code...