Deep Learning with R Cookbook

By: Swarna Gupta, Rehan Ali Ansari, Dipayan Sarkar

Overview of this book

Deep learning (DL) has evolved in recent years with developments such as generative adversarial networks (GANs), variational autoencoders (VAEs), and deep reinforcement learning. This book will get you up and running with R 3.5.x to help you implement DL techniques. The book starts with the various DL techniques that you can implement in your apps. A unique set of recipes will help you solve binomial and multinomial classification problems, and perform regression and hyperparameter optimization. To help you gain hands-on experience of the concepts, the book features recipes for implementing convolutional neural networks (CNNs), recurrent neural networks (RNNs), and long short-term memory (LSTM) networks, as well as sequence-to-sequence models and reinforcement learning. You'll then learn about high-performance computation using GPUs, along with the parallel computation capabilities available in R. Later, you'll explore libraries, such as MXNet, that are designed for GPU computing and state-of-the-art DL. Finally, you'll discover how to solve different problems in NLP, object detection, and action identification, before understanding how to use pre-trained models in DL apps. By the end of this book, you'll have comprehensive knowledge of DL and DL packages, and be able to develop effective solutions for different DL problems.

Cliff walking using RL

By now, you should be familiar with the RL framework. In this recipe, we will implement a classic application of the gridworld environment in RL: cliff walking. This problem can be represented as a 4x12 grid. Each episode starts in the lower-left state, S, with the goal state, G, at the bottom right of the grid. The only possible actions in any state are moving left, right, up, or down. The states labeled C along the bottom of the grid are cliffs; any transition into one of them incurs a high negative reward of -100 and instantly sends the agent back to the starting state, S. Reaching the goal state, G, yields a reward of 0, while every other transition yields a reward of -1.

The following image shows the navigation matrix for the cliff-walking problem:

Let's proceed and solve this navigation problem using RL.
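Before we dive in, here is a minimal base-R sketch of the environment dynamics together with a tabular Q-learning loop, assuming a simple (row, column) state encoding. The helper names (step_env, state_id) and the hyperparameter values (alpha, gamma, epsilon) are illustrative choices for this sketch, not the recipe's actual code:

```r
n_rows <- 4; n_cols <- 12
actions <- c("up", "down", "left", "right")

start <- c(4, 1)    # lower-left corner (row, col), state S
goal  <- c(4, 12)   # lower-right corner, state G
# Cliff cells C: the bottom row between S and G
is_cliff <- function(s) s[1] == 4 && s[2] > 1 && s[2] < 12

# One environment step: returns the next state and the reward
step_env <- function(s, a) {
  s2 <- switch(a,
    up    = c(max(s[1] - 1, 1), s[2]),
    down  = c(min(s[1] + 1, n_rows), s[2]),
    left  = c(s[1], max(s[2] - 1, 1)),
    right = c(s[1], min(s[2] + 1, n_cols)))
  if (is_cliff(s2)) {
    list(state = start, reward = -100)  # falling off the cliff resets to S
  } else if (all(s2 == goal)) {
    list(state = s2, reward = 0)        # reaching G
  } else {
    list(state = s2, reward = -1)       # every other transition
  }
}

# Tabular Q-learning over the 48 states
state_id <- function(s) (s[1] - 1) * n_cols + s[2]
Q <- matrix(0, nrow = n_rows * n_cols, ncol = length(actions))
alpha <- 0.1; gamma <- 1.0; epsilon <- 0.1

for (episode in 1:500) {
  s <- start
  for (t in 1:1000) {   # step cap so every episode terminates
    # Epsilon-greedy action selection
    a_idx <- if (runif(1) < epsilon) sample(4, 1) else which.max(Q[state_id(s), ])
    out <- step_env(s, actions[a_idx])
    # Q-learning update: bootstrap from the greedy value of the next state
    target <- out$reward + gamma * max(Q[state_id(out$state), ])
    Q[state_id(s), a_idx] <- Q[state_id(s), a_idx] +
      alpha * (target - Q[state_id(s), a_idx])
    s <- out$state
    if (all(s == goal)) break
  }
}
```

Once the Q-table has been learned, the greedy action in any state is simply actions[which.max(Q[state_id(s), ])]. Because Q-learning bootstraps from the greedy target, its learned policy tends toward the shortest path along the cliff edge, which is what makes this gridworld a useful testbed.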

...