RNNs are not perfect; they suffer from two main issues: exploding gradients and vanishing gradients. To understand these issues, let's first understand what a gradient means. A gradient is the vector of partial derivatives of a function with respect to its inputs. In simple layman's terms, a gradient measures how much the output of a function changes if one were to change the inputs a little bit.
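As a minimal illustration of this idea, the Python sketch below estimates a gradient with a finite difference: nudge the input a little and measure how much the output changes. The function `f` and the step size `eps` are illustrative choices, not anything prescribed by RNN training.

```python
# Finite-difference estimate of a gradient: nudge the input a little
# and measure how much the output changes.
def f(x):
    return x ** 2  # illustrative function: f(x) = x^2, so df/dx = 2x

def numerical_gradient(func, x, eps=1e-6):
    # Central difference: (f(x + eps) - f(x - eps)) / (2 * eps)
    return (func(x + eps) - func(x - eps)) / (2 * eps)

print(numerical_gradient(f, 3.0))  # ~6.0, matching the analytic gradient 2 * 3
```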
Exploding gradients refer to a situation where the BPTT algorithm accumulates very large error gradients, assigning an unreasonably high importance to the weights without a rationale. The problem results in an unstable network. In extreme situations, the values of the weights can become so large that they overflow and result in NaN values.
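To see why this can happen, note that backpropagating through time multiplies the gradient by the recurrent weight matrix once per time step. The NumPy sketch below (a toy illustration, not a full RNN; the matrix size, scale, and step count are hypothetical) shows the gradient norm growing geometrically when that matrix's largest singular value exceeds 1:

```python
import numpy as np

# Toy illustration of exploding gradients in BPTT: backprop through T time
# steps multiplies the gradient by the recurrent weight matrix W each step.
rng = np.random.default_rng(0)
W = rng.normal(scale=1.5, size=(4, 4))  # largest singular value > 1
grad = np.ones(4)

for t in range(50):          # 50 steps of backprop through time
    grad = W.T @ grad        # chain rule through one more time step
    if t % 10 == 0:
        print(f"step {t:2d}: gradient norm = {np.linalg.norm(grad):.3e}")
# The norm grows geometrically; in float32 it would soon overflow to inf/NaN.
```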
The exploding gradients problem can be detected by watching for the following signs while training the network; a minimal monitoring sketch follows the list:
- The model weights quickly become very large during training
- The model weights become NaN values during training
- The error gradient...
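As a concrete way to watch for these signs, here is a minimal monitoring sketch in PyTorch (an assumed framework choice; the model, data, and hyperparameters are dummies) that logs the global gradient norm and checks the weights for NaN after each update:

```python
import torch
import torch.nn as nn

model = nn.RNN(input_size=8, hidden_size=16, batch_first=True)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(4, 20, 8)        # dummy batch: 4 sequences of length 20
target = torch.randn(4, 20, 16)  # dummy targets matching the hidden size

for step in range(5):
    optimizer.zero_grad()
    output, _ = model(x)
    loss = nn.functional.mse_loss(output, target)
    loss.backward()

    # Track the global gradient norm across all parameters.
    grad_norm = torch.norm(
        torch.stack([p.grad.detach().norm() for p in model.parameters()])
    )
    # Check whether any weight has already become NaN.
    has_nan = any(torch.isnan(p).any().item() for p in model.parameters())
    print(f"step {step}: grad norm = {grad_norm.item():.4f}, NaN weights = {has_nan}")

    optimizer.step()
```

A steadily or suddenly growing gradient norm, or a NaN flag flipping to True, would correspond directly to the signs listed above.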