In this first chapter, you have dealt with a classic tabular competition. By reading the competition's notebooks and discussions, we arrived at a simple solution involving just two models that can be easily blended. In particular, we showed how a denoising autoencoder can produce an alternative representation of the data, which is especially useful when applying neural networks to tabular problems. By understanding and replicating solutions from past competitions, you can quickly build up your core competencies and become able to perform consistently better in more recent competitions and challenges.
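To recall the core idea, here is a minimal sketch of a denoising autoencoder for tabular data. It uses scikit-learn's MLPRegressor as a stand-in for a full deep-learning implementation, and the swap-noise scheme, hidden size, and other parameters are illustrative assumptions rather than the exact solution discussed in the chapter:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 10))  # stand-in for a tabular feature matrix

def swap_noise(X, p=0.15):
    """Corrupt each cell with probability p by swapping in a value
    drawn from the same column of another random row (swap noise is
    a common corruption scheme for tabular denoising autoencoders)."""
    X_noisy = X.copy()
    mask = rng.random(X.shape) < p
    random_rows = rng.integers(0, X.shape[0], size=X.shape)
    for j in range(X.shape[1]):
        X_noisy[mask[:, j], j] = X[random_rows[mask[:, j], j], j]
    return X_noisy

# Train the network to reconstruct the clean input from its corrupted
# version; the hidden layer then encodes an alternative representation.
dae = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
dae.fit(swap_noise(X), X)

# Extract the hidden-layer activations (ReLU is MLPRegressor's default)
# to use as features for a downstream model.
hidden = np.maximum(0, X @ dae.coefs_[0] + dae.intercepts_[0])
```

The `hidden` array can then be fed to any downstream model (for instance, one of the two blended models mentioned above) in place of, or alongside, the raw features.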
In the next chapter, we will explore another tabular competition from Kaggle, this time revolving around a complex prediction problem involving time series.
Join our book’s Discord space
Join our Discord community to meet like-minded people and learn alongside more than 2000 members at: