5.3 Familiar probabilistic concepts from deep learning
While this book introduces many concepts that may be new to you, some of the ideas discussed here should feel familiar. In particular, you may already know Variational Inference (VI) from its use in Variational Autoencoders (VAEs).
As a quick refresher, VAEs are generative models that learn encodings that can be used to generate plausible data. Much like standard autoencoders, VAEs comprise an encoder-decoder architecture.
Figure 5.1: Illustration of autoencoder architecture
With a standard autoencoder, the model learns a mapping from the input to the latent space via the encoder, and then from the latent space back to a reconstruction of the input via the decoder.
As we see here, our output is defined as x̂ = f_d(z), where our encoding z is given by z = f_e(x), and f_e(·) and f_d(·) are our encoder and decoder functions, respectively. If we want to generate new data using values in our latent space, we could simply inject some random values into the...
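To make the composition x̂ = f_d(f_e(x)) concrete, here is a minimal sketch using untrained linear maps in NumPy. The weight names, dimensions, and random initialisation are purely illustrative assumptions; a real autoencoder would use learned, typically nonlinear, networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions: 4-dimensional inputs, 2-dimensional latent space.
D_IN, D_LATENT = 4, 2

# Randomly initialised (untrained) linear encoder/decoder weights,
# standing in for learned neural networks.
W_enc = rng.normal(size=(D_LATENT, D_IN))
W_dec = rng.normal(size=(D_IN, D_LATENT))

def f_e(x):
    """Encoder: map an input x to its latent code z."""
    return W_enc @ x

def f_d(z):
    """Decoder: map a latent code z back to data space."""
    return W_dec @ z

x = rng.normal(size=D_IN)
z = f_e(x)        # z = f_e(x)
x_hat = f_d(z)    # x_hat = f_d(z) = f_d(f_e(x))

# Generating "new" data by injecting a random value into the latent space:
z_random = rng.normal(size=D_LATENT)
x_new = f_d(z_random)
```

Note that with a standard (non-variational) autoencoder, sampling z at random like this gives no guarantee that x_new resembles the training data, which is part of the motivation for the probabilistic treatment that VAEs introduce.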