LSTM is a particular RNN architecture, originally conceived by Hochreiter and Schmidhuber in 1997. This type of neural network has recently been rediscovered in the context of deep learning because it largely avoids the vanishing gradient problem, and in practice it offers excellent results and performance.
The vanishing gradient problem affects the training of ANNs with gradient-based learning methods. In gradient-based methods such as backpropagation, weights are adjusted in proportion to the gradient of the error. Because of the way these gradients are computed, by repeated multiplication through the chain rule, their magnitude tends to decrease exponentially as we proceed towards the deepest layers. The problem is that in some cases the gradient becomes vanishingly small, effectively preventing a weight from changing its value. In the worst case...
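The exponential shrinkage described above can be sketched numerically. The following toy experiment (the depth, layer width, and weight scale are illustrative choices, not taken from the text) backpropagates a gradient through a stack of tanh layers and tracks its norm: each backward step multiplies the gradient by the transposed weight matrix and the tanh derivative, so its magnitude collapses towards zero long before it reaches the first layers.

```python
import numpy as np

rng = np.random.default_rng(0)

n_layers = 50   # depth of the chain (illustrative)
width = 64      # units per layer (illustrative)

# Forward pass through a stack of tanh layers, storing activations.
x = rng.standard_normal(width)
weights = [rng.standard_normal((width, width)) * 0.05 for _ in range(n_layers)]
activations = [x]
for W in weights:
    activations.append(np.tanh(W @ activations[-1]))

# Backward pass: propagate a gradient from the output towards the input,
# applying the chain rule through each tanh layer.
grad = np.ones(width)
norms = []
for W, a in zip(reversed(weights), reversed(activations[1:])):
    grad = W.T @ (grad * (1.0 - a ** 2))  # d/dz tanh(z) = 1 - tanh(z)^2
    norms.append(np.linalg.norm(grad))

print(f"gradient norm near the output: {norms[0]:.3e}")
print(f"gradient norm near the input:  {norms[-1]:.3e}")
```

Running this, the gradient norm near the input layers is many orders of magnitude smaller than near the output, which is exactly why the earliest layers of a deep network, or the earliest time steps of an unrolled RNN, learn so slowly with plain backpropagation.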