Autoencoders come in a variety of configurations, such as simple autoencoders, sparse autoencoders, denoising autoencoders, and convolutional autoencoders.
- Simple autoencoder: In a simple autoencoder, the hidden layers have fewer nodes (neurons) than the input layer. For example, for the MNIST dataset, a 784-feature input can be connected to a hidden layer of, say, 512 or 256 nodes, which in turn connects to a 784-feature output layer. With a 256-node hidden layer, the 784 input features must therefore be represented by only 256 nodes during training. Because the hidden layer is smaller than the input, simple autoencoders are also known as undercomplete autoencoders.
A simple autoencoder can be single-layer or multi-layer. Generally, a single-layer autoencoder does not perform well in production. A multi-layer autoencoder has more than one hidden layer, divided into encoder and decoder groupings: the encoder layers encode a large number of features into a smaller number of neurons, and the decoder layers then decode the learned compressed...
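The 784 → 256 → 784 shape flow described above can be sketched framework-free. This is a minimal NumPy illustration, not a trained model: the weights are random and the layer sizes are simply the MNIST figures used in the text, so the point is how the encoder compresses and the decoder reconstructs the dimensions, not reconstruction quality.

```python
import numpy as np

rng = np.random.default_rng(0)

n_input, n_hidden = 784, 256  # MNIST features -> undercomplete bottleneck

# Randomly initialized (untrained) encoder and decoder weights.
W_enc = rng.normal(0.0, 0.01, (n_input, n_hidden))
b_enc = np.zeros(n_hidden)
W_dec = rng.normal(0.0, 0.01, (n_hidden, n_input))
b_dec = np.zeros(n_input)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def encode(x):
    # Compress 784 input features into 256 hidden activations.
    return sigmoid(x @ W_enc + b_enc)

def decode(h):
    # Reconstruct 784 features from the 256-node code.
    return sigmoid(h @ W_dec + b_dec)

x = rng.random((32, n_input))  # a batch of 32 synthetic "images"
code = encode(x)               # shape (32, 256): the compressed representation
recon = decode(code)           # shape (32, 784): the reconstruction
```

Training would then minimize a reconstruction loss (e.g. mean squared error between `x` and `recon`) with respect to the four weight arrays; a multi-layer variant simply stacks additional, progressively smaller encoder layers and mirrors them in the decoder.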