When translating a sentence, describing the content of an image, annotating a sentence, or transcribing audio, it is natural to focus on one part of the input at a time, grasp the sense of that block and transform it, and then move on to the next part, following an order that serves global understanding.
For example, in German, under certain conditions, verbs come at the end of the sentence. So, when translating into English, once the subject has been read and translated, a good machine translation neural network could shift its focus to the end of the sentence to find the verb and translate it into English. This process of matching input positions to the current output prediction is made possible by the mechanism of attention.
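This matching can be sketched as dot-product attention: score each input position against the current decoder state, normalize the scores with a softmax into a probability distribution, and take the weighted sum of the input representations. The following is a minimal NumPy illustration, not the book's implementation; the encoder states and query vector are toy values chosen so the aligned position is obvious:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax: subtract the max before exponentiating.
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, encoder_states):
    """Dot-product attention: score every input position against the
    current decoder query, normalize with softmax, and return the
    weighted sum of encoder states (the context vector)."""
    scores = encoder_states @ query      # one score per input position
    weights = softmax(scores)            # attention distribution, sums to 1
    context = weights @ encoder_states   # context vector for this output step
    return context, weights

# Toy example: three input positions with orthogonal representations.
states = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])
query = np.array([0.0, 0.0, 1.0])        # decoder state aligned with position 2

context, weights = attend(query, states)
print(weights)  # position 2 receives the largest weight
```

The softmax keeps every weight non-negative and makes them sum to one, so the network can be trained end-to-end to learn where to look at each output step.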
First, let's return to classification networks designed with a softmax layer (see Chapter 2, Classifying Handwritten Digits with a Feedforward Network) that outputs a non-negative weight...