So far, we have seen ANNs in which the input signals are propagated to the output layer in a forward pass, and the weights are optimized iteratively so that the trained model generalizes to new input data based on the training set.
A special class of real-life problems involves training an ANN on sequences of data, for example, text, speech, or any other form of audio input. In simple terms, when part of the network's output at one step (its hidden state) is fed back as input at the next step, the network topology is called a recurrent neural network (RNN).
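To make the feedback loop concrete, here is a minimal sketch of a single recurrent step, assuming a vanilla (Elman-style) RNN cell; the dimensions and weight names (`W_xh`, `W_hh`, `b_h`) are illustrative, not taken from any particular library.

```python
import numpy as np

rng = np.random.default_rng(0)

input_size, hidden_size = 3, 4
W_xh = rng.standard_normal((hidden_size, input_size)) * 0.1   # input-to-hidden weights
W_hh = rng.standard_normal((hidden_size, hidden_size)) * 0.1  # hidden-to-hidden (recurrent) weights
b_h = np.zeros(hidden_size)

def rnn_step(x, h_prev):
    """One time step: the previous hidden state h_prev is fed back in."""
    return np.tanh(W_xh @ x + W_hh @ h_prev + b_h)

# Unroll over a short sequence: each step's hidden state becomes
# the recurrent input to the next step.
sequence = [rng.standard_normal(input_size) for _ in range(5)]
h = np.zeros(hidden_size)
for x in sequence:
    h = rnn_step(x, h)

print(h.shape)  # the final hidden state summarizes the whole sequence
```

The key design point is the `W_hh @ h_prev` term: it is what distinguishes this cell from a plain feed-forward layer, since the computation at each step depends on everything the network has seen so far.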
In feed-forward networks, by contrast, the inputs are treated as independent of one another. In an image recognition problem, for instance, the input images are independent samples from the dataset: we feed the network the pixel matrix of each image, and the input data for one image does not influence the input for the next image that the ANN processes.