Our first examples


Let's begin with a few simple examples to understand what is going on.

For some of us, it's very tempting to try the shiniest algorithms and jump straight into hyper-parameter optimization, rather than doing the less glamorous work of understanding a model step by step.

A simple 2D example

Let's develop our intuition of how the autoencoder works with a simple two-dimensional example. 

We first generate 10,000 points from a two-dimensional normal distribution with mean (0, 0) and identity covariance matrix, so each coordinate has mean 0 and variance 1:

library(MASS)   # provides mvrnorm() for sampling from a multivariate normal
library(keras)  # loaded now; we will use it shortly for the autoencoder

Sigma <- matrix(c(1, 0, 0, 1), 2, 2)           # 2 x 2 identity covariance matrix
n_points <- 10000
df <- mvrnorm(n = n_points, rep(0, 2), Sigma)  # 10,000 points with mean (0, 0)
df <- as.data.frame(df)                        # columns are named V1 and V2

The distribution of the values should look as follows:

Distribution of the variable V1 we just generated; the variable V2 looks fairly similar.
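To check this for yourself, you can plot a histogram of each column. This is a minimal sketch using base R graphics; the plotting calls are our own, not part of the book's code:

par(mfrow = c(1, 2))    # show the two histograms side by side
hist(df$V1, breaks = 50, main = "Distribution of V1", xlab = "V1")
hist(df$V2, breaks = 50, main = "Distribution of V2", xlab = "V2")
par(mfrow = c(1, 1))    # reset the plotting layout

Both columns should show the familiar bell shape centered at 0.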

Let's spice things up a bit and add some outliers to the mixture. In many fraud applications, the fraud rate is about 1–5%, so we generate 1% of our samples as coming from a normal...
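The excerpt is cut off here, but the intent is clear: draw roughly 1% of the points from a different normal distribution so that they stand apart from the main cloud. The sketch below is our own reconstruction; in particular, the shifted mean rep(5, 2) is an assumed value for illustration, not necessarily the book's choice:

n_outliers <- round(0.01 * n_points)    # 1% of the samples, here 100 points
# Assumption: outliers come from a normal centered away from the origin
outliers <- mvrnorm(n = n_outliers, rep(5, 2), Sigma)
df[1:n_outliers, ] <- outliers          # replace the first 1% of rows with outliers

With the outliers centered at (5, 5), they sit several standard deviations away from the bulk of the data, which is exactly the kind of anomaly an autoencoder should be able to flag.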