Deep Learning with R for Beginners

By: Mark Hodnett, Joshua F. Wiley, Yuxi (Hayden) Liu, Pablo Maldonado

Overview of this book

Deep learning has a range of practical applications in several domains, while R is the preferred language for designing and deploying deep learning models. This Learning Path introduces you to the basics of deep learning and even teaches you to build a neural network model from scratch. As you make your way through the chapters, you’ll explore deep learning libraries and understand how to create deep learning models for a variety of challenges, right from anomaly detection to recommendation systems. The Learning Path will then help you cover advanced topics, such as generative adversarial networks (GANs), transfer learning, and large-scale deep learning in the cloud, in addition to model optimization, overfitting, and data augmentation. Through real-world projects, you’ll also get up to speed with training convolutional neural networks (CNNs), recurrent neural networks (RNNs), and long short-term memory networks (LSTMs) in R. By the end of this Learning Path, you’ll be well-versed with deep learning and have the skills you need to implement a number of deep learning concepts in your research work or projects.
Table of Contents (23 chapters)

Training an auto-encoder in R


In this section, we are going to train an auto-encoder in R and show that it can be used as a dimensionality reduction technique. We will compare it with the approach we took in Chapter 2, Training a Prediction Model, where we used PCA to find the principal components in the image data. In that example, we found that 23 factors were sufficient to explain 50% of the variance in the data, and we built a neural network model using just those 23 factors to classify whether an image was a 5 or a 6. That model achieved 97.86% accuracy.
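As a reminder of the PCA step being compared against, the following sketch shows how the number of components needed to explain 50% of the variance can be found with base R's prcomp(). This is illustrative only and uses synthetic stand-in data, not the book's image data or exact code:

```r
# Illustrative sketch: choose the number of principal components that
# explain at least 50% of the variance (synthetic data stands in for
# the image pixels used in Chapter 2).
set.seed(42)
x <- matrix(runif(200 * 50), nrow = 200)      # 200 rows, 50 "pixel" columns
pca <- prcomp(x, center = TRUE, scale. = FALSE)

# cumulative proportion of variance explained by the first k components
var_explained <- cumsum(pca$sdev^2) / sum(pca$sdev^2)
n_factors <- which(var_explained >= 0.5)[1]

# the scores on these components become the classifier's input features
features <- pca$x[, 1:n_factors, drop = FALSE]
```

The auto-encoder will be used in the same role: its compressed hidden-layer representation replaces the PCA scores as input to the classifier.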

We are going to follow a similar process in this example, using the MNIST dataset again. The following code from Chapter8/encoder.R loads the data. We will use half of the data to train the auto-encoder; the other half will be used to build a classification model that evaluates how well the auto-encoder performs as a dimensionality reduction technique. The first part of the code is similar to what we have seen in previous examples...
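The half-and-half split described above can be sketched as follows. This is a minimal illustration with synthetic stand-in data (the book's script reads the MNIST data from file instead); the column layout of one label column plus 784 pixel columns mirrors the usual MNIST CSV format:

```r
# Minimal sketch of the 50/50 split: one half trains the auto-encoder,
# the other half trains a classifier on the encoded features.
set.seed(42)
data <- data.frame(label = sample(0:9, 200, replace = TRUE),
                   matrix(runif(200 * 784), nrow = 200))  # stand-in pixels

idx <- sample(nrow(data), nrow(data) / 2)
train_ae  <- data[idx, ]    # half: train the auto-encoder
train_clf <- data[-idx, ]   # half: evaluate it via a classification model
```

Keeping the two halves disjoint matters here: if the classifier were trained on rows the auto-encoder had already seen, the evaluation would overstate how well the learned encoding generalizes.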