So, all we are really doing here is generating an output by transforming a sample drawn from the probability distribution that represents the encoded latent space. In the previous chapter, we saw how to produce such a latent space from input data using encoding functions. In this chapter, we will see how to learn a continuous latent space, l, and then sample from it to generate novel outputs. To do this, we essentially learn a differentiable generator function, g(l; θ^(g)), which transforms a sample from the continuous latent space l into an output. It is this generator function that the neural network approximates.
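To make this concrete, here is a minimal sketch of such a generator in PyTorch. The network, layer sizes, and the standard normal prior are all illustrative assumptions, not a prescribed architecture; the point is simply that a differentiable function g(l; θ^(g)) maps a latent sample to an output, so θ^(g) can be learned by gradient descent.

```python
import torch
import torch.nn as nn

# Illustrative dimensions (assumptions, not from the text):
latent_dim, output_dim = 16, 784

# A hypothetical generator g(l; theta_g): a small differentiable
# network mapping a latent sample l to an output x.
generator = nn.Sequential(
    nn.Linear(latent_dim, 128),
    nn.ReLU(),
    nn.Linear(128, output_dim),
    nn.Sigmoid(),  # squashes outputs to [0, 1], e.g. pixel intensities
)

# Sample l from a simple continuous prior (here, a standard normal)
# and transform it into a novel output.
l = torch.randn(1, latent_dim)   # a sample from the latent space
x_generated = generator(l)       # g(l; theta_g) produces an output
print(x_generated.shape)         # torch.Size([1, 784])
```

Because every operation in the generator is differentiable, the parameters θ^(g) can be trained end to end with backpropagation, which is exactly what VAEs and GANs do in their different ways.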
The family of generative networks includes both Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs). As we mentioned before, there exist many types of generative models...