R Deep Learning Projects
Overview of this book

R is a popular programming language among statisticians and mathematicians for statistical analysis, and it is increasingly used for deep learning. Deep learning is one of today's most active areas of machine learning and is finding practical applications in many domains. This book demonstrates end-to-end implementations of five real-world projects on popular deep learning topics such as handwritten digit recognition, traffic light detection, fraud detection, text generation, and sentiment analysis. You'll learn how to train effective neural networks in R, including convolutional neural networks, recurrent neural networks, and LSTMs, and apply them in practical scenarios. The book also shows how neural networks can be trained using GPU capabilities. You will use popular R libraries and packages, such as MXNetR, H2O, and deepnet, to implement the projects. By the end of this book, you will have a better understanding of deep learning concepts and techniques and how to use them in practical settings.

Variational Autoencoders


Variational Autoencoders (VAEs) are a more recent take on the autoencoding problem. A standard autoencoder learns an essentially arbitrary function that compresses the data; a VAE instead learns the random process that is assumed to generate the data.

VAEs also have an encoder and a decoder. The encoder learns the mean and standard deviation of a normal distribution that is assumed to have generated the data. The mean and standard deviation parameterize the latent variables, so called because they are not observed explicitly but are inferred from the data.
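In symbols, using the standard VAE formulation (not code from this book), the encoder maps an input $x$ to the parameters of a Gaussian over a latent variable $z$, and a sample is drawn via the reparameterization trick so that sampling stays differentiable:

```latex
q_\phi(z \mid x) = \mathcal{N}\!\bigl(z;\; \mu_\phi(x),\ \sigma_\phi^2(x)\bigr),
\qquad
z = \mu_\phi(x) + \sigma_\phi(x) \odot \varepsilon,
\quad \varepsilon \sim \mathcal{N}(0, I)
```

Writing the sample as a deterministic function of $\mu_\phi(x)$, $\sigma_\phi(x)$, and independent noise $\varepsilon$ is what allows the encoder's parameters $\phi$ to be trained by ordinary backpropagation.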

The decoder part of a VAE maps these latent-space points back into data space. As before, we need a loss function to measure the difference between the original inputs and their reconstructions. An extra term is often added to this loss, called the Kullback-Leibler divergence, or simply KL divergence. The KL divergence computes, roughly, how much one probability distribution differs from another.
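For a Gaussian encoder and a standard normal prior, the KL term has a closed form, giving the usual VAE loss: a reconstruction term plus a regularization term. This is the textbook formulation of the objective, not an implementation detail specific to any R package:

```latex
\mathcal{L}(x) =
\underbrace{\mathbb{E}_{q_\phi(z \mid x)}\bigl[-\log p_\theta(x \mid z)\bigr]}_{\text{reconstruction}}
\;+\;
\underbrace{D_{\mathrm{KL}}\bigl(q_\phi(z \mid x)\,\big\|\,\mathcal{N}(0, I)\bigr)}_{\text{regularization}},
\qquad
D_{\mathrm{KL}} = \frac{1}{2} \sum_{j=1}^{d} \left( \mu_j^2 + \sigma_j^2 - \log \sigma_j^2 - 1 \right)
```

The KL term pulls the per-input latent distributions toward the prior, which is what makes the latent space smooth enough to sample from when generating new data.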