Advanced Machine Learning with R

By: Cory Lesmeister, Dr. Sunil Kumar Chinnamgari


Backpropagation through time


We are already aware that RNNs are cyclic graphs, unlike feedforward networks, which are directed acyclic graphs. In a feedforward network, the error derivatives at any layer are computed from the layer above it; an RNN has no such layering, so the derivatives cannot be computed in the same way. A simple solution to this problem is to unroll the RNN through time so that it resembles a feedforward network. To do this, the hidden units of the RNN are replicated at each time step; each replication forms a layer, and the layer at time step t connects to the layer at time step t+1. Because every replicated layer shares the same weights, we randomly initialize the weights once, unroll the network, and then use ordinary backpropagation on the unrolled graph to optimize the hidden-layer weights, summing the gradient contributions from every time step. The lowest layer is initialized with the initial hidden state, which is itself a parameter and is also optimized as part of backpropagation; a minimal sketch of this procedure follows this paragraph. The backpropagation through time algorithm involves the following steps...
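Below is a minimal sketch in base R of the unroll-and-backpropagate procedure just described, for a vanilla tanh RNN with a squared-error loss on the final output. The names (W_xh, W_hh, W_hy, h0) and the toy dimensions are illustrative assumptions, not code from this book:

# Minimal backpropagation-through-time sketch for a vanilla RNN (base R).
# Assumed parameterization: h_t = tanh(W_xh x_t + W_hh h_{t-1}), y = W_hy h_T.
set.seed(42)
T_steps <- 5; n_in <- 3; n_hid <- 4; n_out <- 2

# Randomly initialize the shared weights (initialize, unroll, backpropagate)
W_xh <- matrix(rnorm(n_hid * n_in,  sd = 0.1), n_hid, n_in)
W_hh <- matrix(rnorm(n_hid * n_hid, sd = 0.1), n_hid, n_hid)
W_hy <- matrix(rnorm(n_out * n_hid, sd = 0.1), n_out, n_hid)
h0   <- rnorm(n_hid, sd = 0.1)   # initial state: the parameter that seeds the lowest layer

x        <- lapply(seq_len(T_steps), function(t) rnorm(n_in))  # toy input sequence
y_target <- rnorm(n_out)                                       # toy target at the final step

# Forward pass: unroll the network, one "layer" per time step
h <- vector("list", T_steps); h_prev <- h0
for (t in seq_len(T_steps)) {
  h[[t]] <- tanh(W_xh %*% x[[t]] + W_hh %*% h_prev)
  h_prev <- h[[t]]
}
y_hat <- W_hy %*% h[[T_steps]]
loss  <- 0.5 * sum((y_hat - y_target)^2)

# Backward pass: walk the unrolled graph from t = T back to t = 1,
# accumulating gradients for the shared weights at every time step
dW_xh <- 0 * W_xh; dW_hh <- 0 * W_hh
dW_hy <- (y_hat - y_target) %*% t(h[[T_steps]])
dh    <- t(W_hy) %*% (y_hat - y_target)   # gradient flowing into h_T
for (t in seq(T_steps, 1)) {
  dz    <- dh * (1 - h[[t]]^2)            # tanh'(z) = 1 - tanh(z)^2
  dW_xh <- dW_xh + dz %*% t(x[[t]])       # same W_xh is reused at each step
  h_before <- if (t > 1) h[[t - 1]] else h0
  dW_hh <- dW_hh + dz %*% t(h_before)
  dh    <- t(W_hh) %*% dz                 # pass the error back to step t - 1
}
dh0 <- dh                                 # gradient for the initial-state parameter

Note how dW_xh and dW_hh accumulate a contribution at every time step: the unrolled layers are copies of the same weights, so their gradients are summed, and dh0 is the gradient for the initial state that initializes the lowest layer.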