We have already discussed overfitting in detail. Here, let's briefly recap what we learned and what overfitting means in a neural network scenario.
By now, we know that when a large number of parameters (as in deep learning) are at our disposal to map and explain an event, the resulting model will more often than not fit the training data well and appear to describe the event properly. However, the real test of any model is always on unseen data, and that is where we assess how the model actually fares. We expect our model to generalize, so that it scores on unseen test data in line with its performance on the training data. But many times the model fails to generalize to unseen data, because it has not learned the underlying insights and causal relationships of the event; it has instead memorized the particulars of the training set. In this scenario, one can see a huge gulf of variance...
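The gap between training performance and unseen-data performance can be made concrete with a small sketch. Here, a high-degree polynomial stands in for an over-parameterized model: the data, the degrees compared, and the noise level are all hypothetical choices for illustration, not a prescribed recipe.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: a noisy linear relationship, y = 2x + noise.
x_train = rng.uniform(-1, 1, 15)
y_train = 2 * x_train + rng.normal(0, 0.3, 15)
x_test = rng.uniform(-1, 1, 15)
y_test = 2 * x_test + rng.normal(0, 0.3, 15)

def fit_and_score(degree):
    # A high-degree polynomial fit plays the role of a model with
    # far more parameters than the data warrants.
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_mse, test_mse

for degree in (1, 14):
    train_mse, test_mse = fit_and_score(degree)
    print(f"degree={degree:2d}  train MSE={train_mse:.4f}  test MSE={test_mse:.4f}")
```

The degree-14 model drives its training error toward zero, yet its test error is far worse than the simple degree-1 fit: it has matched the noise in the training points rather than the relationship that generated them, which is exactly the failure of generalization described above.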