
The Kaggle Book

By : Konrad Banachewicz, Luca Massaron

Overview of this book

Millions of data enthusiasts from around the world compete on Kaggle, the most famous data science competition platform of them all. Participating in Kaggle competitions is a surefire way to improve your data analysis skills, network with an amazing community of data scientists, and gain valuable experience to help grow your career. The first book of its kind, The Kaggle Book assembles in one place the techniques and skills you’ll need for success in competitions, data science projects, and beyond. Two Kaggle Grandmasters walk you through modeling strategies you won’t easily find elsewhere, along with the knowledge they’ve accumulated along the way. As well as Kaggle-specific tips, you’ll learn more general techniques for approaching tasks based on image, tabular, and textual data, as well as reinforcement learning. You’ll design better validation schemes and work more comfortably with different evaluation metrics. Whether you want to climb the ranks of Kaggle, build up your data science skills, or improve the accuracy of your existing models, this book is for you. Plus, join our Discord Community to learn along with more than 1,000 members and meet like-minded people!
Table of Contents (20 chapters)
Part I: Introduction to Competitions
Part II: Sharpening Your Skills for Competitions
Part III: Leveraging Competitions for Your Career
Other Books You May Enjoy

Reducing the size of your data

If you are working directly in Kaggle Notebooks, you will find their limitations quite annoying, and dealing with them a time sink. One of these limitations is the out-of-memory error that stops execution and forces you to restart the script from the beginning. This is quite common in many competitions. Unlike deep learning competitions based on text or images, where you can retrieve the data from disk in small batches and process them incrementally, most algorithms that work with tabular data require all the data to be held in memory.

The most common situation is that you have loaded the data from a CSV file using Pandas’ read_csv, but the resulting DataFrame is too large to be handled for feature engineering and machine learning in a Kaggle Notebook. The solution is to reduce the memory footprint of the Pandas DataFrame you are using without losing any information (lossless compression). This can easily be achieved using the following script derived...
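As a rough sketch of the idea (not the exact script the authors reference), memory can be reduced by downcasting each numeric column to the smallest dtype that still represents its values. Downcasting integers this way is lossless; note that downcasting floats from float64 to float32 may lose precision, so the hypothetical helper below only applies it when explicitly requested:

```python
import numpy as np
import pandas as pd

def reduce_mem_usage(df: pd.DataFrame, downcast_floats: bool = False) -> pd.DataFrame:
    """Downcast numeric columns to smaller dtypes to shrink memory usage.

    Integer downcasting is lossless; float downcasting (float64 -> float32)
    can lose precision, so it is opt-in.
    """
    for col in df.columns:
        col_type = df[col].dtype
        if pd.api.types.is_integer_dtype(col_type):
            # pd.to_numeric picks the smallest integer dtype that fits the values
            df[col] = pd.to_numeric(df[col], downcast="integer")
        elif downcast_floats and pd.api.types.is_float_dtype(col_type):
            df[col] = pd.to_numeric(df[col], downcast="float")
    return df

# Usage: compare memory before and after downcasting
df = pd.DataFrame({
    "small_int": np.arange(1000, dtype="int64"),       # fits in int16
    "values": np.linspace(0.0, 1.0, 1000),             # float64 by default
})
before = df.memory_usage(deep=True).sum()
df = reduce_mem_usage(df)
after = df.memory_usage(deep=True).sum()
```

Here the `int64` column (values 0–999) is downcast to `int16`, cutting that column's memory by a factor of four, while the float column is left untouched by default.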