The Kaggle Book

By Konrad Banachewicz, Luca Massaron
Overview of this book

Millions of data enthusiasts from around the world compete on Kaggle, the most famous data science competition platform of them all. Participating in Kaggle competitions is a surefire way to improve your data analysis skills, network with an amazing community of data scientists, and gain valuable experience to help grow your career. The first book of its kind, The Kaggle Book assembles in one place the techniques and skills you'll need for success in competitions, data science projects, and beyond. Two Kaggle Grandmasters walk you through modeling strategies you won't easily find elsewhere, along with the knowledge they've accumulated along the way. As well as Kaggle-specific tips, you'll learn more general techniques for approaching tasks based on image, tabular, and textual data, as well as reinforcement learning. You'll design better validation schemes and work more comfortably with different evaluation metrics. Whether you want to climb the ranks of Kaggle, build more data science skills, or improve the accuracy of your existing models, this book is for you. Plus, join our Discord Community to learn along with more than 1,000 members and meet like-minded people!
Table of Contents (20 chapters)

Preface
Part I: Introduction to Competitions
Part II: Sharpening Your Skills for Competitions
Part III: Leveraging Competitions for Your Career
Other Books You May Enjoy
Index

Stacking models together

Stacking was first described in David Wolpert's paper (Wolpert, D. H. Stacked generalization. Neural Networks 5.2, 1992), but it took years before the idea became widely accepted and common (Scikit-learn, for instance, only implemented a stacking wrapper with release 0.22 in December 2019). Its eventual adoption was driven principally by the Netflix competition first, and by Kaggle competitions afterward.
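As a minimal sketch of the Scikit-learn stacking wrapper mentioned above (the dataset, base models, and hyperparameters here are illustrative assumptions, not choices from the book), a stacking classifier can be assembled like this:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Toy data standing in for a real competition dataset
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Base learners; probability=True lets the SVC expose predict_proba
estimators = [
    ('rf', RandomForestClassifier(n_estimators=100, random_state=0)),
    ('svc', SVC(probability=True, random_state=0)),
]

# final_estimator is the meta-learner; cv controls the internal
# cross-validation used to produce out-of-fold base predictions
stack = StackingClassifier(estimators=estimators,
                           final_estimator=LogisticRegression(),
                           cv=5)
stack.fit(X_train, y_train)
print(stack.score(X_test, y_test))
```

Internally, the wrapper trains the meta-learner on out-of-fold predictions from the base estimators, which is exactly the strategy discussed next.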

In stacking, you always have a meta-learner. This time, however, it is not trained on a holdout, but on the entire training set, thanks to the out-of-fold (OOF) prediction strategy. We already discussed this strategy in Chapter 6, Designing Good Validation. In OOF prediction, you start from a replicable k-fold cross-validation split. Replicable means that, by recording the cases in each training and test set at each round, or by ensuring reproducibility with a random seed, you can replicate the same validation scheme for each model that needs to be part of the stacking ensemble.
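The following is a minimal sketch of generating OOF predictions by hand (again, the dataset, model, and seed values are illustrative assumptions): every training sample is predicted by the fold in which it was held out, so the resulting vector covers the entire training set without leakage.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# A fixed random seed makes the split replicable for every model
kf = KFold(n_splits=5, shuffle=True, random_state=42)

oof_preds = np.zeros(len(X))
for train_idx, valid_idx in kf.split(X):
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    # Predict only the held-out fold, never data the model was fit on
    oof_preds[valid_idx] = model.predict_proba(X[valid_idx])[:, 1]

# oof_preds now spans the whole training set and can be used
# as an input feature when training the meta-learner
```

Repeating this loop with the same KFold object for each base model yields one OOF column per model, and stacking those columns gives the meta-learner its training matrix.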