K-fold cross-validation is a better estimator of our model's performance than a single train-test split. Here's how it works:
We split our data into some number of equal-sized sections (usually 3, 5, or 10). Call this number k.
For each "fold" of the cross-validation, we will treat k-1 of the sections as the training set, and the remaining section as our test set.
For each remaining fold, a different arrangement of k-1 sections becomes the training set and a different section becomes the test set.
We compute our chosen evaluation metric on each fold's test set.
We average the scores across all k folds at the end (a short sketch of this loop is shown below).
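Here is a minimal sketch of those steps, assuming scikit-learn is available and using a small synthetic regression dataset purely for illustration (the variable names and the choice of mean squared error as the metric are just examples, not part of our running dataset):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import KFold

# Synthetic regression data standing in for any dataset (illustrative only).
rng = np.random.RandomState(0)
X = rng.rand(100, 1) * 10
y = 3.0 * X.ravel() + rng.normal(scale=2.0, size=100)

k = 5
kf = KFold(n_splits=k, shuffle=True, random_state=0)
fold_scores = []

for train_idx, test_idx in kf.split(X):
    # Train on k-1 sections, evaluate on the held-out section.
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    preds = model.predict(X[test_idx])
    fold_scores.append(mean_squared_error(y[test_idx], preds))

# The cross-validated estimate is the average of the per-fold scores.
print("Per-fold MSE:", np.round(fold_scores, 2))
print(f"Mean MSE across {k} folds: {np.mean(fold_scores):.2f}")
```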
Cross-validation effectively performs multiple train-test splits on the same dataset. We do this for a few reasons, but mainly because cross-validation gives the most honest estimate of our model's out-of-sample error.
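If scikit-learn is available, that whole loop can also be collapsed into a single `cross_val_score` call; this sketch reuses the `X` and `y` from the example above (scikit-learn reports MSE as a negative score so that higher is always better, hence the sign flip):

```python
from sklearn.model_selection import cross_val_score

# cross_val_score runs the k train-test splits internally and returns
# one score per fold; negate because the scorer is "neg_mean_squared_error".
cv_scores = cross_val_score(LinearRegression(), X, y, cv=5,
                            scoring="neg_mean_squared_error")
print(f"Mean MSE across 5 folds: {-cv_scores.mean():.2f}")
```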
To explain this visually, let's look at our mammal brain and body weight example for a second. The following code...