
The Kaggle Workbook

By: Konrad Banachewicz, Luca Massaron

Overview of this book

More than 80,000 Kaggle novices currently participate in Kaggle competitions. To help them navigate the often-overwhelming world of Kaggle, two Grandmasters put their heads together to write The Kaggle Book, which made plenty of waves in the community. Now, they’ve come back with an even more practical approach based on hands-on exercises that can help you start thinking like an experienced data scientist. In this book, you’ll get up close and personal with four extensive case studies based on past Kaggle competitions. You’ll learn how bright minds predicted which drivers would likely avoid filing insurance claims in Brazil and see how expert Kagglers used gradient-boosting methods to model Walmart unit sales time-series data. Get into computer vision by discovering different solutions for identifying the type of disease present on cassava leaves. And see how the Kaggle community created predictive algorithms to solve the natural language processing problem of subjective question-answering. You can use this workbook as a supplement alongside The Kaggle Book or on its own alongside resources available on the Kaggle website and other online communities. Whatever path you choose, this workbook will help make you a formidable Kaggle competitor.
Table of Contents (7 chapters)

What this book covers

Chapter 1, The Most Renowned Tabular Competition – Porto Seguro’s Safe Driver Prediction. In this competition, you are asked to solve a common problem in insurance: figuring out who is going to have an auto insurance claim in the next year. We guide you through using LightGBM and denoising autoencoders properly, and show how to blend them effectively.

Chapter 2, The Makridakis Competitions – M5 on Kaggle for Accuracy and Uncertainty. In this competition, based on Walmart’s daily sales time series of items hierarchically arranged into departments, categories, and stores spread across three U.S. states, we recreate the ideas behind Monsaraida’s fourth-place solution to demonstrate how LightGBM can be used effectively on this time series problem.

Chapter 3, Vision Competition – Cassava Leaf Disease Classification. In this contest, the participants were tasked with classifying crowdsourced photos of cassava plants grown by farmers in Uganda. We use this multiclass problem to demonstrate how to build a complete pipeline for image classification and show how this baseline can be extended, through a vast array of possible additions, into a competitive solution.
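The skeleton of any image-classification pipeline is the same regardless of model size: load labeled images, split, fit, and score. As a stand-in for the deep CNNs and full-size cassava photos the book actually uses, this sketch runs the same skeleton on scikit-learn's small built-in digits dataset.

```python
# Sketch: the minimal shape of a multiclass image-classification pipeline,
# demonstrated on tiny 8x8 digit images instead of cassava leaf photos.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

digits = load_digits()
# Stratified split keeps the class balance identical in train and test,
# which matters when some disease classes are rare.
X_tr, X_te, y_tr, y_te = train_test_split(
    digits.data, digits.target, stratify=digits.target, random_state=0
)

# Preprocess -> fit -> evaluate: swap the linear model for a CNN and the
# scaler for image augmentation, and the structure is the competition pipeline.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=2000))
clf.fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
```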

Chapter 4, NLP Competition – Google Quest Q&A Labeling, discusses a contest focused on predicting human responders’ evaluations of subjective aspects of a question-answer pair, where an understanding of context was crucial. Casting the challenge as a multiclass classification problem, we build a baseline solution exploring the semantic characteristics of a corpus, followed by an examination of more advanced methods that were necessary for leaderboard ascent.
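A baseline that "explores the semantic characteristics of a corpus" often starts with TF-IDF features feeding a linear classifier, before moving to the transformer models needed for leaderboard ascent. The six-document corpus and its labels below are a toy stand-in for the Quest question-answer pairs, invented purely for illustration.

```python
# Sketch: TF-IDF + linear-model baseline for classifying short texts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus standing in for the real question-answer pairs.
texts = [
    "how do I reverse a list in python",
    "python list slicing question",
    "best hiking trails near the lake",
    "which mountain trail is easiest",
    "reverse a string with slicing in python",
    "camping gear for a mountain trip",
]
labels = ["code", "code", "outdoors", "outdoors", "code", "outdoors"]

# TF-IDF turns each text into a sparse word-weight vector; the linear model
# then learns which words signal which class.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
pred = clf.predict(["slicing a python list"])[0]
```

Such a bag-of-words baseline ignores word order and context entirely, which is precisely why contextual models were needed in a competition where understanding context was crucial.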