
Deep Learning with R Cookbook

By: Swarna Gupta, Rehan Ali Ansari, Dipayan Sarkar

Overview of this book

Deep learning (DL) has evolved in recent years with developments such as generative adversarial networks (GANs), variational autoencoders (VAEs), and deep reinforcement learning. This book will get you up and running with R 3.5.x to help you implement DL techniques. The book starts with the various DL techniques that you can implement in your apps. A unique set of recipes will help you solve binomial and multinomial classification problems, and perform regression and hyperparameter optimization. To help you gain hands-on experience of the concepts, the book features recipes for implementing convolutional neural networks (CNNs), recurrent neural networks (RNNs), and long short-term memory (LSTM) networks, as well as sequence-to-sequence models and reinforcement learning. You'll then learn about high-performance computation using GPUs, as well as R's parallel computation capabilities. Later, you'll explore libraries, such as MXNet, that are designed for GPU computing and state-of-the-art DL. Finally, you'll discover how to solve different problems in NLP, object detection, and action identification, before understanding how to use pre-trained models in DL apps. By the end of this book, you'll have comprehensive knowledge of DL and DL packages, and be able to develop effective solutions for different DL problems.

Functional API

Keras's functional API gives us more flexibility when it comes to building complex models. We can create non-sequential connections between layers, models with multiple inputs or outputs, and models with shared or reused layers.
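For example, a layer instance can be defined once and applied to more than one input so that its weights are shared between those paths. The following is a minimal, self-contained sketch; the shapes, sizes, and names here are illustrative and not part of this recipe:

library(keras)

# Define a layer instance once...
shared_dense <- layer_dense(units = 32, activation = "relu")

# ...then apply it to two different inputs; both paths reuse the same weights
input_a <- layer_input(shape = c(16))
input_b <- layer_input(shape = c(16))
output_a <- input_a %>% shared_dense()
output_b <- input_b %>% shared_dense()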

How to do it...

In this section, we will use the same simulated dataset that we created in the previous recipe, Sequential API. Here, we will create a multi-output functional model:
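Note that x_data and y_data are assumed to carry over from the Sequential API recipe. If you are running this recipe on its own, the following is a minimal sketch that simulates stand-in data matching the shapes this model expects (784 input features, a single numeric target); the exact simulation in the previous recipe may differ:

library(keras)

# Hypothetical stand-in data: 1,000 samples, 784 features, one numeric target
x_data <- matrix(rnorm(1000 * 784), nrow = 1000, ncol = 784)
y_data <- matrix(rnorm(1000), nrow = 1000, ncol = 1)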

  1. Let's start by importing the required library and creating an input layer:
library(keras)

# input layer
inputs <- layer_input(shape = c(784))
  2. Next, we need to define two outputs:
predictions1 <- inputs %>%
  layer_dense(units = 8) %>%
  layer_activation('relu') %>%
  layer_dense(units = 1, name = "pred_1")

predictions2 <- inputs %>%
  layer_dense(units = 16) %>%
  layer_activation('tanh') %>%
  layer_dense(units = 1, name = "pred_2")
  3. Now, we need to define a functional Keras model:
model_functional <- keras_model(inputs = inputs, outputs = c(predictions1, predictions2))

Let's look at the summary of the model:

summary(model_functional)

The following screenshot shows the model's summary:

  4. Now, we compile our model:
model_functional %>% compile(
  loss = "mse",
  optimizer = optimizer_rmsprop(),
  metrics = list("mean_absolute_error")
)
  5. Next, we train the model and visualize its training metrics:
history_functional <- model_functional %>% fit(
  x_data,
  list(y_data, y_data),
  epochs = 30,
  batch_size = 128,
  validation_split = 0.2
)

Now, let's plot the model loss for the training and validation data of prediction 1 and prediction 2:

# Plot the model loss of the prediction 1 training data
plot(history_functional$metrics$pred_1_loss, main = "Model Loss", xlab = "epoch", ylab = "loss", col = "blue", type = "l")

# Plot the model loss of the prediction 1 validation data
lines(history_functional$metrics$val_pred_1_loss, col = "green")

# Plot the model loss of the prediction 2 training data
lines(history_functional$metrics$pred_2_loss, col = "red")

# Plot the model loss of the prediction 2 validation data
lines(history_functional$metrics$val_pred_2_loss, col = "black")

# Add legend
legend("topright", c("training loss prediction 1", "validation loss prediction 1", "training loss prediction 2", "validation loss prediction 2"), col = c("blue", "green", "red", "black"), lty = 1)

The following plot shows the training and validation loss for both prediction 1 and prediction 2:

Now, let's plot the mean absolute error for the training and validation data of prediction 1 and prediction 2:

# Plot the mean absolute error of the prediction 1 training data
plot(history_functional$metrics$pred_1_mean_absolute_error, main = "Mean Absolute Error", xlab = "epoch", ylab = "error", col = "blue", type = "l")

# Plot the mean absolute error of the prediction 1 validation data
lines(history_functional$metrics$val_pred_1_mean_absolute_error, col = "green")

# Plot the mean absolute error of the prediction 2 training data
lines(history_functional$metrics$pred_2_mean_absolute_error, col = "red")

# Plot the mean absolute error of the prediction 2 validation data
lines(history_functional$metrics$val_pred_2_mean_absolute_error, col = "black")

# Add legend
legend("topright", c("training mean absolute error prediction 1", "validation mean absolute error prediction 1", "training mean absolute error prediction 2", "validation mean absolute error prediction 2"), col = c("blue", "green", "red", "black"), lty = 1)

The following plot shows the mean absolute errors for prediction 1 and prediction 2:
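Alternatively, the keras package provides a plot() method for training history objects, which renders all of the recorded metrics at once:

# Plot all training/validation metrics recorded in the history object
plot(history_functional)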

How it works...

To create a model using the functional API, we need to create the input and output layers independently and then pass them to the keras_model() function to define the complete model. In the previous section, we created a model with two different output layers that share a single input tensor.

In step 1, we created an input tensor using the layer_input() function, which serves as the entry point into the computation graph generated by the Keras model. In step 2, we defined two output layers with different configurations; that is, different activation functions and numbers of units. The input tensor flows through both of these paths and produces two different outputs.

In step 3, we defined our model using the keras_model() function. It takes two arguments: inputs and outputs. These arguments specify which layers act as the input and output layers of the model. In the case of multi-input or multi-output models, you can use a vector of input layers and output layers, as shown here:

keras_model(inputs = c(input_layer_1, input_layer_2), outputs = c(output_layer_1, output_layer_2))

After we configured our model, we defined the learning process, trained our model, and visualized the loss and accuracy metrics. The compile() and fit() functions, which we used in steps 4 and 5, were described in detail in the How it works section of the Sequential API recipe.
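After training, calling predict() on a multi-output model returns one set of predictions per output layer. Here is a minimal sketch; x_new is a hypothetical matrix of new observations with 784 columns, not part of the recipe:

# x_new is a hypothetical matrix of new observations (n rows x 784 columns)
x_new <- matrix(rnorm(5 * 784), nrow = 5, ncol = 784)

# Returns a list of two prediction matrices, one per output (pred_1, pred_2)
preds <- predict(model_functional, x_new)
str(preds)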

There's more...

You will come across scenarios where you want to feed the output of one model into another model, alongside an additional input. The layer_concatenate() function can be used to do this. Let's define a new input, concatenate it with the predictions1 output layer that we defined in the How to do it section of this recipe, and build a model:

# Define a new input for the model
new_input <- layer_input(shape = c(5), name = "new_input")

# Define the output layer of the new model
main_output <- layer_concatenate(c(predictions1, new_input)) %>%
  layer_dense(units = 64, activation = 'relu') %>%
  layer_dense(units = 1, activation = 'sigmoid', name = 'main_output')

# Define a multi-input, multi-output model
model <- keras_model(
  inputs = c(inputs, new_input),
  outputs = c(predictions1, main_output)
)

We can visualize the summary of the model using the summary() function:
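summary(model)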

It is good practice to give different layers unique names while working with complex models.
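Unique names also let us configure each output separately at compile time: compile() accepts named lists for loss and loss_weights, keyed by the output layer names. The following is a minimal sketch for the model we just built; the particular losses and weights are illustrative assumptions, not part of the recipe:

model %>% compile(
  optimizer = optimizer_rmsprop(),
  # One loss per named output layer (illustrative choices)
  loss = list(pred_1 = "mse", main_output = "binary_crossentropy"),
  # Optionally weight each output's contribution to the total loss
  loss_weights = list(pred_1 = 0.5, main_output = 1.0)
)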