R Deep Learning Cookbook

By PKS Prakash, Achyutuni Sri Krishna Rao

Overview of this book

Deep learning is the next big thing. It is a part of machine learning, and its results in applications with huge and complex data are remarkable. At the same time, the R programming language is very popular among data miners and statisticians. This book will help you work through the problems you face while executing different tasks, and it will help you understand hacks in deep learning, neural networks, and advanced machine learning techniques. It also takes you through complex deep learning algorithms and the various deep learning packages and libraries available in R, starting with the different packages for deep learning and moving on to neural networks and their structures. You will also encounter applications in text mining and processing, along with a comparison of CPU and GPU performance. By the end of the book, you will have a logical understanding of deep learning and of the different deep learning packages, so that you can choose the most appropriate solutions for your problems.

Training a Restricted Boltzmann machine


Every training step of an RBM goes through two phases: the forward phase and the backward phase (or reconstruction phase). The reconstruction of the visible units is fine-tuned by making several iterations of the forward and backward phases, as sketched below.
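A minimal sketch of this training loop might look as follows. This is an illustration, not the book's code: train_rbm, forward_pass, and backward_pass are hypothetical helper names, bias updates are omitted for brevity, and a single contrastive-divergence step per iteration is assumed.

    # Hypothetical sketch of the RBM training loop (one CD-1 step per epoch).
    # forward_pass() and backward_pass() are placeholders for the two phases
    # described below; bias updates are omitted for brevity.
    train_rbm <- function(X, W, hb, vb, epochs = 10, lr = 0.1) {
      for (epoch in seq_len(epochs)) {
        h0 <- forward_pass(X, W, hb)     # forward phase: visible -> hidden
        v1 <- backward_pass(h0, W, vb)   # reconstruction: hidden -> visible
        h1 <- forward_pass(v1, W, hb)    # forward phase on the reconstruction
        # Contrastive-divergence update: positive minus negative associations
        W  <- W + lr * (t(X) %*% h0 - t(v1) %*% h1) / nrow(X)
      }
      W
    }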

Forward phase: In the forward phase, the input data is passed from the visible layer to the hidden layer, and all the computation occurs within the nodes of the hidden layer. The computation essentially amounts to taking a stochastic decision for each connection from the visible layer to the hidden layer: the input data (X) is multiplied by the weight matrix (W), and the hidden bias vector (hb) is added.

The resultant vector, whose size equals the number of hidden nodes, is then passed through a sigmoid function to determine each hidden node's output (or activation state). In our case, each input digit will produce a vector of 900 probabilities, and as we have 55,000 input digits, we will have an...
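As a rough illustration of this forward phase, a minimal base-R sketch is given below. This is not the book's implementation: the sigmoid and forward_pass helpers are written here purely for illustration, and the 784-pixel input width is an assumption based on 28 x 28 digit images (only the 900 hidden nodes and 55,000 digits come from the text).

    # Minimal sketch of the forward phase (illustration only). The 900 hidden
    # nodes follow the text; the 784 visible nodes (28 x 28 pixels) are assumed.
    sigmoid <- function(z) 1 / (1 + exp(-z))

    forward_pass <- function(X, W, hb) {
      # Linear transform plus hidden bias, squashed to per-node probabilities
      prob_h <- sigmoid(sweep(X %*% W, 2, hb, "+"))
      # Stochastic decision: each hidden node fires with its own probability
      (prob_h > matrix(runif(length(prob_h)), nrow(prob_h))) * 1
    }

    # Example with random data: 5 digits of 784 pixels each, 900 hidden nodes
    X  <- matrix(runif(5 * 784), 5, 784)
    W  <- matrix(rnorm(784 * 900, sd = 0.01), 784, 900)
    hb <- rep(0, 900)
    h  <- forward_pass(X, W, hb)   # a 5 x 900 matrix of 0/1 activations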