Fine-tuning the parameters of the autoencoder


The autoencoder has a number of parameters to tune, depending on the type of autoencoder we are working with. The major parameters in an autoencoder include the following (a code sketch showing how they map to function arguments appears after the list):

  • Number of nodes in each hidden layer
  • Number of hidden layers (applicable to deep autoencoders)
  • Activation function, such as sigmoid, tanh, softmax, or ReLU
  • Regularization parameters, or weight-decay terms, on the hidden-unit weights
  • Fraction of the signal to be corrupted, in a denoising autoencoder
  • Sparsity parameter, in a sparse autoencoder, controlling the expected activation of neurons in the hidden layers
  • Batch size, if using batch gradient descent; learning rate and momentum parameter, if using stochastic gradient descent
  • Maximum number of iterations (epochs) to be used for training
  • Weight initialization scheme
  • Dropout rate, if dropout regularization is used
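
Most of these knobs surface directly as function arguments in R deep learning packages. The following is a minimal sketch, assuming the h2o package and a hypothetical numeric data frame train_df; the argument names are h2o's, and the values are illustrative placeholders rather than recommendations:

library(h2o)
h2o.init(nthreads = -1)

train_hex <- as.h2o(train_df)              # hypothetical training data

ae_model <- h2o.deeplearning(
  x = colnames(train_hex),
  training_frame = train_hex,
  autoencoder = TRUE,                      # train as an autoencoder
  hidden = c(64, 32, 64),                  # nodes per hidden layer (deep autoencoder)
  activation = "Tanh",                     # activation unit
  l2 = 1e-4,                               # weight-decay (regularization) term
  input_dropout_ratio = 0.2,               # fraction of the signal corrupted (denoising)
  average_activation = 0.05,               # expected activation of hidden neurons (sparsity)
  sparsity_beta = 0.01,                    # weight of the sparsity penalty
  mini_batch_size = 32,                    # batch size
  adaptive_rate = FALSE,                   # use plain SGD so rate/momentum apply
  rate = 0.01,                             # learning rate
  momentum_start = 0.9,                    # momentum parameter
  epochs = 50,                             # maximum training iterations
  initial_weight_distribution = "UniformAdaptive",  # weight initialization
  seed = 1234
)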

These hyperparameters can be tuned by framing the selection as a grid search problem. However, each hyperparameter combination requires training...
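
To make the grid search concrete, the sketch below uses h2o.grid to try a few combinations of architecture and weight decay and rank the resulting models by reconstruction error (MSE). It assumes the train_hex frame from the previous sketch, and the candidate values are placeholders:

grid <- h2o.grid(
  algorithm = "deeplearning",
  grid_id = "ae_grid",
  x = colnames(train_hex),
  training_frame = train_hex,
  autoencoder = TRUE,
  activation = "Tanh",
  epochs = 20,
  hyper_params = list(
    hidden = list(c(32), c(64, 32, 64)),   # candidate architectures
    l2     = c(1e-2, 1e-4)                 # candidate weight-decay values
  )
)

# Rank the trained candidates by reconstruction error.
sorted <- h2o.getGrid("ae_grid", sort_by = "mse", decreasing = FALSE)
print(sorted)

Note that even this tiny grid trains four models, which is why exhaustive search over many hyperparameters quickly becomes expensive.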