## Variational autoencoders

Autoencoders, as we have seen, can be thought of as a more powerful, nonlinear cousin of PCA. They can also be extended to become generative models. Instead of mapping an input to a single encoding, **variational autoencoders** (**VAEs**) encode inputs as *distributions*. This means that for a fraud case, the encoder would produce a distribution over possible encodings, all of which represent the most important characteristics of the transaction. The decoder would then turn samples from this distribution back into transactions that closely resemble the original.

This is useful because it allows us to generate new transaction data. One problem of fraud detection that we discovered earlier is that there are not all that many fraudulent transactions. By using a VAE, we can therefore sample as many transaction encodings as we like, decode them, and train our classifier on the additional fraudulent transaction data.
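As a minimal sketch of this augmentation idea, suppose we already have the encoding distribution of a known fraud case and a trained decoder. The `mu`, `sigma`, and `decode` values below are hypothetical stand-ins, not outputs of a real trained VAE; the point is only the sample-then-decode loop:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical encoder outputs for one known fraudulent transaction:
# a mean vector and a standard deviation vector over the encoding space.
mu = np.array([0.5, -1.2, 0.3])
sigma = np.array([0.1, 0.2, 0.05])

def decode(z):
    # Stand-in for a trained decoder network: here just a fixed
    # linear map from the 3-dimensional encoding back to a
    # 5-dimensional "transaction" vector.
    W = np.arange(15).reshape(3, 5) / 10.0
    return z @ W

# Sample as many encodings as we like from N(mu, sigma^2) and decode
# them into synthetic fraudulent transactions for classifier training.
n_samples = 1000
z = mu + sigma * rng.standard_normal((n_samples, len(mu)))
synthetic_transactions = decode(z)
print(synthetic_transactions.shape)  # (1000, 5)
```

In practice, `decode` would be the decoder half of the trained VAE, and each sampled row would be a plausible new fraud example rather than an exact copy of the original transaction.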

So, how do VAEs do it? Instead of having just one compressed representation vector, a VAE has two: one for the mean encoding, *μ*, and one for the standard deviation of this encoding, *σ*:
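To make the role of the two vectors concrete, here is a short sketch of how an encoding is drawn from them, commonly called the reparameterization trick: *z* = *μ* + *σ* · *ε*, with *ε* drawn from a standard normal. The numbers are illustrative placeholders, not values from a trained model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical encoder outputs for a single input transaction.
mu = np.array([0.5, -1.2, 0.3])      # mean of the encoding
sigma = np.array([0.1, 0.2, 0.05])   # standard deviation of the encoding

# Reparameterization trick: sample eps ~ N(0, 1), then shift and
# scale it, so z is a draw from N(mu, sigma^2).
eps = rng.standard_normal(mu.shape)
z = mu + sigma * eps

# Each call produces a slightly different encoding of the same input.
print(z)
```

Writing the sample as a deterministic function of *μ*, *σ*, and the noise *ε* is what lets gradients flow through the sampling step during training.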

Both...