The models we have learned up to now were trained using supervised learning. In this section, we will learn about autoencoders. They are feedforward, non-recurrent neural networks, and they learn through unsupervised learning. They are the latest buzz, along with generative adversarial networks, and find applications in image reconstruction, clustering, machine translation, and much more. They were initially proposed in the 1980s by Geoffrey E. Hinton and the PDP group (http://www.cs.toronto.edu/~fritz/absps/clp.pdf).
The autoencoder basically consists of two cascaded neural networks—the first network acts as an encoder; it takes the input x and encodes it using a transformation h to an encoded signal y, shown in the following equation:

y = h(x)
The second neural network uses the encoded signal y as its input and performs another transformation f to get a reconstructed signal r, shown as follows:

r = f(y) = f(h(x))
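A minimal NumPy sketch of these two cascaded mappings may make the structure concrete. The dimensions, the sigmoid activation, and the random weights below are illustrative assumptions, not details from the text; a real autoencoder would learn the weights by minimizing the reconstruction loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions): an 8-dimensional input
# compressed into a 3-dimensional code.
n_in, n_hidden = 8, 3

# Encoder parameters for y = h(x) = sigmoid(W_e x + b_e)
W_e = rng.normal(scale=0.1, size=(n_hidden, n_in))
b_e = np.zeros(n_hidden)

# Decoder parameters for r = f(y) = sigmoid(W_d y + b_d)
W_d = rng.normal(scale=0.1, size=(n_in, n_hidden))
b_d = np.zeros(n_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def encode(x):
    # Transformation h: input x -> encoded signal y
    return sigmoid(W_e @ x + b_e)

def decode(y):
    # Transformation f: encoded signal y -> reconstruction r
    return sigmoid(W_d @ y + b_d)

x = rng.random(n_in)            # an arbitrary input vector
y = encode(x)                   # encoded signal
r = decode(y)                   # reconstructed signal
mse = np.mean((x - r) ** 2)     # mean squared reconstruction error
print(y.shape, r.shape)         # -> (3,) (8,)
```

Note that the code has the same dimensionality as neither the input nor the hidden layer by accident: the bottleneck (here 3 units for an 8-dimensional input) is what forces the network to learn a compressed representation.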
The loss function is the MSE with error e defined as the difference between the original input x and...