
R Deep Learning Essentials - Second Edition

By : Mark Hodnett, Joshua F. Wiley

Overview of this book

Deep learning is a powerful subset of machine learning that is very successful in domains such as computer vision and natural language processing (NLP). This second edition of R Deep Learning Essentials will open the gates for you to enter the world of neural networks by building powerful deep learning models using the R ecosystem. This book will introduce you to the basic principles of deep learning and teach you to build a neural network model from scratch. As you make your way through the book, you will explore deep learning libraries, such as Keras, MXNet, and TensorFlow, and create interesting deep learning models for a variety of tasks and problems, including structured data, computer vision, text data, anomaly detection, and recommendation systems. You’ll cover advanced topics, such as generative adversarial networks (GANs), transfer learning, and large-scale deep learning in the cloud. In the concluding chapters, you will learn about the theoretical concepts of deep learning projects, such as model optimization, overfitting, and data augmentation, together with other advanced topics. By the end of this book, you will be fully prepared and able to implement deep learning concepts in your research work or projects.
Table of Contents (13 chapters)

Training an auto-encoder in R

In this section, we are going to train an auto-encoder in R and show that it can be used as a dimensionality reduction technique. We will compare it with the approach we took in Chapter 2, Training a Prediction Model, where we used PCA to find the principal components of the image data. In that example, we found that 23 principal components were sufficient to explain 50% of the variance in the data. We then built a neural network model using just these 23 components to classify whether an image was a 5 or a 6, achieving 97.86% accuracy.
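As a reminder of the Chapter 2 approach, the following is a minimal sketch of choosing the number of principal components that explain at least 50% of the variance. A random matrix stands in for the real image data here, so the component count it produces is illustrative only; on the actual dataset the answer was 23.

```r
# Sketch: pick the smallest number of principal components that
# explains at least 50% of the variance. X is synthetic stand-in data.
set.seed(42)
X <- matrix(rnorm(1000 * 30), nrow = 1000, ncol = 30)
pca <- prcomp(X, center = TRUE, scale. = FALSE)

# Cumulative proportion of variance explained
cum_var <- cumsum(pca$sdev^2) / sum(pca$sdev^2)
k <- which(cum_var >= 0.5)[1]   # smallest k reaching 50%

# Project the data onto the first k components
reduced <- pca$x[, 1:k, drop = FALSE]
```

The `reduced` matrix is what would then be fed to the downstream classifier in place of the raw pixels.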

We are going to follow a similar process in this example, and we will use the MNIST dataset again. The following code from Chapter8/encoder.R loads the data. We will use half of the data to train the auto-encoder, and the other half will be used to build a classification model to evaluate how good the auto-encoder...
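The 50/50 split described above can be sketched as follows. The `mnist` data frame here is a synthetic stand-in (the book's actual loading code lives in Chapter8/encoder.R), but the index-based split is the standard base-R pattern.

```r
# Sketch: split the data in half, one part for training the
# auto-encoder, the other for the downstream classification model.
# `mnist` is a synthetic stand-in for the real MNIST data frame.
set.seed(123)
mnist <- data.frame(matrix(runif(200 * 10), nrow = 200))

idx <- sample(nrow(mnist), size = nrow(mnist) / 2)
train_ae  <- mnist[idx, ]    # half for the auto-encoder
train_clf <- mnist[-idx, ]   # other half for the classifier
```

Using `sample()` on the row indices rather than on the rows themselves guarantees the two halves are disjoint and together cover the full dataset.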