Building and comparing stochastic encoders and decoders
Stochastic encoders fall into the domain of generative modeling, where the objective is to learn the joint probability P(X) over the given data X, transformed into another high-dimensional space. For example, we may want to learn about images and produce similar, but not exactly identical, images by learning the distribution of pixels and the dependencies between them. One popular approach in generative modeling is the Variational Autoencoder (VAE), which combines deep learning with statistical inference by placing a strong distributional assumption on the latent variable, h ~ P(h), such as a Gaussian or Bernoulli distribution. For given weights W, X can then be sampled from the distribution Pw(X|h). An example of a VAE architecture is shown in the following diagram:
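To complement the diagram, the following is a minimal sketch of such an architecture, assuming a PyTorch-style implementation; the layer sizes, the 784-dimensional (MNIST-like) input, and all names are illustrative assumptions, not taken from the original diagram:

import torch
import torch.nn as nn

class VAE(nn.Module):
    """Minimal VAE sketch: the encoder outputs the parameters of a
    Gaussian over the latent code h, and the decoder maps a sample
    of h back to pixel space, that is, Pw(X|h)."""
    def __init__(self, input_dim=784, hidden_dim=256, latent_dim=20):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.to_mean = nn.Linear(hidden_dim, latent_dim)    # mean of the latent Gaussian
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)  # log-variance of the latent Gaussian
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim), nn.Sigmoid(),  # pixel probabilities in [0, 1]
        )

    def reparameterize(self, mean, logvar):
        # Draw h ~ N(mean, var) in a differentiable way (reparameterization trick).
        std = torch.exp(0.5 * logvar)
        return mean + std * torch.randn_like(std)

    def forward(self, x):
        hidden = self.encoder(x)
        mean, logvar = self.to_mean(hidden), self.to_logvar(hidden)
        h = self.reparameterize(mean, logvar)
        return self.decoder(h), mean, logvar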
The cost function of a VAE is based on log-likelihood maximization. It consists of a reconstruction error term and a regularization error term:
Cost = Reconstruction Error + Regularization Error
The reconstruction error measures how well the decoded output matches the original input, while the regularization error penalizes the deviation of the encoder's latent distribution from the assumed prior P(h), typically via the Kullback-Leibler (KL) divergence.
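Continuing the sketch above, the two terms can be computed as follows; binary cross-entropy is one standard choice of reconstruction error for pixel data in [0, 1], and the KL term uses the closed form for a Gaussian latent against a standard normal prior:

import torch
import torch.nn.functional as F

def vae_loss(x, x_reconstructed, mean, logvar):
    # Reconstruction error: how well the decoded output matches the input.
    reconstruction = F.binary_cross_entropy(x_reconstructed, x, reduction='sum')
    # Regularization error: KL divergence between N(mean, var) and N(0, I),
    # computed in closed form.
    regularization = -0.5 * torch.sum(1 + logvar - mean.pow(2) - logvar.exp())
    return reconstruction + regularization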