
R Deep Learning Cookbook

By: PKS Prakash, Achyutuni Sri Krishna Rao

Overview of this book

Deep learning is the next big thing: a branch of machine learning whose results on large, complex datasets have been remarkable. At the same time, the R programming language is very popular among data miners and statisticians. This book will help you work through the problems you face while executing different tasks and understand hacks in deep learning, neural networks, and advanced machine learning techniques. It also takes you through complex deep learning algorithms and the various deep learning packages and libraries available in R. It starts with the different deep learning packages, then moves on to neural networks and their structures. You will also encounter applications in text mining and processing, along with a comparison of CPU and GPU performance. By the end of the book, you will have a sound understanding of deep learning and of the deep learning packages that provide the most appropriate solutions to your problems.

Understanding the contrastive divergence of the reconstruction

As a starting point, the objective function can be defined as the minimization of the average negative log-likelihood of reconstructing the visible vector v, where P(v) denotes the vector of generated probabilities:
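The equation image is missing here; the standard form of this objective, reconstructed from the surrounding description (assuming a training set of m visible vectors v^(i)), is:

```latex
\mathrm{obj} \;=\; \operatorname*{arg\,min}_{W}\; -\frac{1}{m}\sum_{i=1}^{m}\log P\!\left(v^{(i)}\right)
```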

Getting ready

This section provides the requirements for image reconstruction using the input probability vector.

  • The MNIST data is loaded into the environment
  • The images are reconstructed using the recipe Backward or reconstruction phase

How to do it...

This recipe presents the steps of the contrastive divergence (CD) technique, which is used to speed up the sampling process:

  1. Compute a positive weight gradient by multiplying (outer product) the input vector X with a sample of the hidden vector h0 from the given probability distribution prob_h0:
w_pos_grad = tf$matmul(tf$transpose(X), h0)
  2. Compute a negative weight gradient by multiplying (outer product) the sample of the reconstructed input data v1 with the updated hidden activation...
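To make the two gradient phases concrete, here is a minimal sketch of the CD-1 update in NumPy rather than the book's R/TensorFlow code; the dimensions, learning rate, and random data are hypothetical and for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy dimensions (hypothetical, for illustration only)
n_visible, n_hidden, batch = 6, 4, 8
W = rng.normal(scale=0.01, size=(n_visible, n_hidden))
X = (rng.random((batch, n_visible)) > 0.5).astype(float)  # binary input batch

# Positive phase: sample hidden units h0 from the input probabilities
prob_h0 = sigmoid(X @ W)
h0 = (rng.random(prob_h0.shape) < prob_h0).astype(float)
w_pos_grad = X.T @ h0                  # outer products summed over the batch

# Negative (reconstruction) phase: reconstruct v1, recompute hidden activations
prob_v1 = sigmoid(h0 @ W.T)
v1 = (rng.random(prob_v1.shape) < prob_v1).astype(float)
h1 = sigmoid(v1 @ W)                   # updated hidden activations
w_neg_grad = v1.T @ h1

# CD-1 weight update: difference of gradients, averaged over the batch
lr = 0.1
W += lr * (w_pos_grad - w_neg_grad) / batch
```

The matrix products `X.T @ h0` and `v1.T @ h1` play the same role as the `tf$matmul(tf$transpose(X), h0)` call in the recipe: each is a sum of per-example outer products between visible and hidden vectors.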