Advanced Deep Learning with R

By: Bharatendra Rai
Overview of this book

Deep learning is a branch of machine learning based on a set of algorithms that attempt to model high-level abstractions in data. Advanced Deep Learning with R will help you understand popular deep learning architectures and their variants in R, along with real-life examples for each. This deep learning book starts by covering the essential deep learning techniques and concepts for prediction and classification. You will learn about neural networks, deep learning architectures, and the fundamentals of implementing deep learning with R. The book will also take you through using important deep learning libraries such as Keras-R and TensorFlow-R to implement deep learning algorithms within applications. You will get up to speed with artificial neural networks, recurrent neural networks, convolutional neural networks, long short-term memory networks, and more using advanced examples. Later, you'll discover how to apply generative adversarial networks (GANs) to generate new images, use autoencoder neural networks for image dimension reduction, de-noising, and correction, and apply transfer learning to prepare, define, train, and model a deep neural network. By the end of this book, you will be ready to apply your knowledge and newly acquired skills to implement deep learning algorithms in R through real-world examples.
Table of Contents (20 chapters)

Section 1: Revisiting Deep Learning Basics
Section 2: Deep Learning for Prediction and Classification
Section 3: Deep Learning for Computer Vision
Section 4: Deep Learning for Natural Language Processing
Section 5: The Road Ahead

Performance optimization tips and best practices

In this section, we will explore changes to the model architecture and other settings that can improve author classification performance. We will carry out two experiments. In both, we will increase the number of most frequent words from 500 to 1,500, increase the length of the integer sequences from 300 to 400, and add a dropout layer after the pooling layer; the data preparation sketch after this paragraph illustrates these settings.
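As a rough sketch of the data preparation implied by these settings (the object names train_text, test_text, trainx, and testx, and the use of text_tokenizer, texts_to_sequences, and pad_sequences from the standard Keras-R text workflow, are assumptions rather than the book's exact code):

# Tokenize with the 1,500 most frequent words and pad to length 400
library(keras)
tokenizer <- text_tokenizer(num_words = 1500) %>%    # top 1,500 words (was 500)
  fit_text_tokenizer(train_text)                     # train_text: assumed character vector
trainx <- texts_to_sequences(tokenizer, train_text) %>%
  pad_sequences(maxlen = 400)                        # sequence length 400 (was 300)
testx <- texts_to_sequences(tokenizer, test_text) %>%
  pad_sequences(maxlen = 400)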

Experimenting with reduced batch size

The code that we'll be using for this experiment is as follows:

# Model architecture
model <- keras_model_sequential() %>%
  layer_embedding(input_dim = 1500,
                  output_dim ...
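The listing is truncated in this excerpt. A minimal sketch of what a complete architecture along these lines could look like is shown below; apart from input_dim = 1500 and input_length = 400, which follow from the settings above, every layer choice and value (embedding size, convolution and pooling parameters, dropout rate, and the n_authors placeholder) is an assumption, not the book's code:

# Hypothetical completion of the truncated listing (assumed values)
model <- keras_model_sequential() %>%
  layer_embedding(input_dim = 1500,          # 1,500 most frequent words
                  output_dim = 32,           # assumed embedding dimension
                  input_length = 400) %>%    # padded sequence length
  layer_conv_1d(filters = 32,
                kernel_size = 5,
                activation = "relu") %>%     # assumed convolution settings
  layer_max_pooling_1d(pool_size = 4) %>%    # pooling layer
  layer_dropout(rate = 0.25) %>%             # dropout added after pooling
  layer_flatten() %>%
  layer_dense(units = n_authors,             # n_authors: placeholder for the
              activation = "softmax")        # number of author classes
summary(model)

Because this experiment's heading refers to a reduced batch size, the batch_size argument passed to fit() would also be lowered relative to the earlier runs; the exact value is not shown in this excerpt.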