
R Deep Learning Projects

Overview of this book

R is a popular programming language among statisticians and mathematicians for statistical analysis, and it is increasingly used for deep learning. Deep learning is one of today's trending topics and is finding practical applications in many domains. This book demonstrates end-to-end implementations of five real-world projects on popular deep learning topics: handwritten digit recognition, traffic light detection, fraud detection, text generation, and sentiment analysis. You'll learn how to train effective neural networks in R, including convolutional neural networks, recurrent neural networks, and LSTMs, and apply them in practical scenarios. The book also shows how neural networks can be trained using GPU capabilities. You will use popular R libraries and packages, such as MXNetR, H2O, and deepnet, to implement the projects. By the end of this book, you will have a better understanding of deep learning concepts and techniques and how to use them in a practical setting.
Table of Contents (11 chapters)

Chapter 3. Fraud Detection with Autoencoders

In this chapter, we continue our journey into deep learning with R by exploring autoencoders.

A classical autoencoder consists of three parts:

  • An encoding function, which compresses your data
  • A decoding function, which reconstructs data from a compressed version
  • A metric or distance, which measures the information lost between your original data and its reconstruction

We typically assume that all of these functions are smooth enough for backpropagation or other gradient-based methods to apply, although they need not be; derivative-free methods could be used to train them instead.
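The three pieces above can be sketched in only a few lines of base R. The example below is illustrative, not taken from the book: a linear encoder and decoder, a mean squared reconstruction error as the distance, and hand-derived gradient descent as the training method. The toy data, layer sizes, and learning rate are all assumptions chosen for the demo.

```r
set.seed(42)

# Toy data: 200 observations of 5 correlated features driven by one factor z
n <- 200
z <- rnorm(n)
X <- cbind(z + rnorm(n, sd = 0.1),
           z + rnorm(n, sd = 0.1),
          -z + rnorm(n, sd = 0.1),
       2 * z + rnorm(n, sd = 0.1),
               rnorm(n, sd = 0.1))

d_in  <- ncol(X)  # original dimension (5)
d_hid <- 2        # compressed dimension

# Encoder and decoder: plain linear maps, small random initialization
W_enc <- matrix(rnorm(d_in * d_hid, sd = 0.1), d_in, d_hid)
W_dec <- matrix(rnorm(d_hid * d_in, sd = 0.1), d_hid, d_in)

lr <- 0.05
for (epoch in 1:2000) {
  H     <- X %*% W_enc          # encoding: compress to d_hid features
  X_hat <- H %*% W_dec          # decoding: reconstruct original features
  E     <- X_hat - X            # residual
  loss  <- mean(E^2)            # reconstruction error (the "distance")

  # Gradients of the mean squared error, derived by hand
  g_dec <- t(H) %*% E * (2 / (n * d_in))
  g_enc <- t(X) %*% (E %*% t(W_dec)) * (2 / (n * d_in))

  W_dec <- W_dec - lr * g_dec
  W_enc <- W_enc - lr * g_enc
}
loss  # much smaller than the initial error: the 2-D code captures the data
```

Because both maps are linear and the loss is smooth, gradient descent works directly here; in later, nonlinear autoencoders the same logic is handled for us by backpropagation inside the deep learning libraries.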

Note

Autoencoding is the process of summarizing information from a potentially large feature set into a smaller feature set.
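The simplest concrete instance of this summarization is principal component analysis, which is the closed-form solution of a linear autoencoder. As a hedged sketch (using R's built-in iris data, not a dataset from this book), the snippet below compresses four features into a two-feature summary and measures what is lost:

```r
# Standardize the 4 numeric iris features
X <- scale(as.matrix(iris[, 1:4]))

pca <- prcomp(X, center = FALSE)
k <- 2                                        # size of the smaller feature set

encoded <- X %*% pca$rotation[, 1:k]          # 150 x 2 compressed version
decoded <- encoded %*% t(pca$rotation[, 1:k]) # reconstruction back in 4-D

reconstruction_error <- mean((X - decoded)^2)
reconstruction_error  # small: two components retain most of the variance
```

A neural autoencoder with nonlinear activations generalizes this idea: the encoder can learn curved, data-specific summaries that PCA's linear projection cannot.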

Although the compression step might remind you of compression algorithms such as MP3, an important difference is that autoencoders are data-specific: an autoencoder trained on pictures of cats and dogs will likely perform poorly on pictures...