Avoiding overfitting in neural networks
Let's understand the causes of overfitting and how to avoid it in neural networks. Nitish Srivastava, Geoffrey Hinton, et al. published a paper, Dropout: A Simple Way to Prevent Neural Networks from Overfitting (https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf), in 2014, which shows how dropout can be used to avoid overfitting.
Problem statement
Deep neural networks contain multiple nonlinear hidden layers, which makes them expressive models that can learn very complicated relationships between inputs and outputs. With limited training data, however, many of these complicated relationships are the result of sampling noise: they exist in the training set but not in the test data, which leads to overfitting. Many techniques have been developed to reduce this overfitting. These include stopping training as soon as performance on a validation set starts getting worse, introducing weight penalties such as L1 and L2 regularization, and soft weight sharing (Nowlan and Hinton, 1992).
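To make two of these remedies concrete, here is a minimal sketch of a weight penalty combined with early stopping, assuming a tf.keras setup; the layer sizes, the L2 coefficient of 1e-4, and the synthetic data are illustrative placeholders, not values from the paper:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, regularizers

# Synthetic stand-in data so the sketch runs end to end (replace with your own).
x_train = np.random.rand(1000, 784).astype("float32")
y_train = np.random.randint(0, 10, size=1000)

# L2 weight penalties discourage the large weights that fit sampling noise.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    layers.Dense(128, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),
    layers.Dense(10, activation="softmax",
                 kernel_regularizer=regularizers.l2(1e-4)),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Early stopping: halt training as soon as validation loss stops improving.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=3, restore_best_weights=True)

model.fit(x_train, y_train, validation_split=0.2,
          epochs=50, callbacks=[early_stop], verbose=0)
```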
Solution
Dropout is a technique that addresses these issues. During training, it randomly drops units (along with their connections) from the network, which prevents units from co-adapting too much; at test time, the full network is used with appropriately scaled weights.
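As a sketch of how this looks in practice, again assuming a tf.keras setup (the 0.5 drop rate for hidden units follows the paper's suggestion; the architecture itself is illustrative):

```python
import tensorflow as tf
from tensorflow.keras import layers

# Each Dropout layer zeroes a random 50% of the previous layer's activations
# on every training step; at test time Keras rescales automatically
# (inverted dropout), so the full network is used unchanged.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

Because a different thinned subnetwork is sampled on every training step, dropout approximately trains an exponentially large ensemble of networks that share weights, which is what gives it its regularizing effect.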