R Deep Learning Cookbook
By: PKS Prakash, Sri Krishna Rao

Overview of this book

Deep learning is the next big thing: a branch of machine learning that delivers remarkable results in applications with huge, complex datasets. At the same time, the R programming language is very popular among data miners and statisticians. This book will help you work through the problems you face while executing different tasks, and understand hacks in deep learning, neural networks, and advanced machine learning techniques. It will also take you through complex deep learning algorithms and the various deep learning packages and libraries available in R. It starts with the different deep learning packages, then moves on to neural networks and their structures. You will also encounter applications in text mining and processing, along with a comparison of CPU and GPU performance. By the end of the book, you will have a logical understanding of deep learning and of the different deep learning packages, so you can choose the most appropriate solutions for your problems.
Table of Contents (11 chapters)
Setting up stacked autoencoders


The stacked autoencoder is an approach to train deep networks consisting of multiple layers trained using the greedy approach. An example of a stacked autoencoder is shown in the following diagram:

An example of a stacked autoencoder

Getting ready

The preceding diagram demonstrates a stacked autoencoder with two layers. A stacked autoencoder can have n layers, which are trained greedily, one layer at a time. For example, the layers of the preceding autoencoder would be trained as follows:

Training of a stacked autoencoder

The initial pre-training of layer 1 is obtained by training it over the actual input xi. The first step is to optimize the encoder weights We(1) with respect to the output X. The second step is to optimize the weights We(2) of the second layer, using the output of the We(1) layer as both input and output. Once all the layers We(i), where i = 1, 2, ..., n and n is the number of layers, are pretrained, model fine-tuning is performed by connecting all the layers together, as...
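
The greedy layer-wise procedure described above can be sketched in base R. This is a minimal, illustrative sketch and not the recipe's implementation: the layer sizes, learning rate, and the `train_autoencoder()` helper are assumptions for demonstration. Each single-layer autoencoder is trained on its own, and the hidden codes of one trained layer become the training input of the next:

```r
# A sigmoid activation for the encoder
sigmoid <- function(z) 1 / (1 + exp(-z))

# Train one autoencoder layer x -> h -> x_hat by gradient descent
# on the squared reconstruction error (illustrative helper).
train_autoencoder <- function(X, n_hidden, lr = 0.1, epochs = 200) {
  n_in <- ncol(X)
  We <- matrix(rnorm(n_in * n_hidden, sd = 0.1), n_in, n_hidden)  # encoder weights
  Wd <- matrix(rnorm(n_hidden * n_in, sd = 0.1), n_hidden, n_in)  # decoder weights
  for (e in seq_len(epochs)) {
    H     <- sigmoid(X %*% We)          # encode
    X_hat <- H %*% Wd                   # linear decode
    err   <- X_hat - X                  # reconstruction error
    grad_Wd <- t(H) %*% err / nrow(X)   # gradient w.r.t. decoder
    dH      <- (err %*% t(Wd)) * H * (1 - H)
    grad_We <- t(X) %*% dH / nrow(X)    # gradient w.r.t. encoder
    Wd <- Wd - lr * grad_Wd
    We <- We - lr * grad_We
  }
  list(We = We, encode = function(X) sigmoid(X %*% We))
}

# Greedy stacking: layer 1 is trained on the raw input X; layer 2 is
# then trained on the hidden codes H1 produced by the frozen layer 1.
set.seed(1)
X      <- matrix(runif(100 * 8), 100, 8)      # toy input, 8 features
layer1 <- train_autoencoder(X, n_hidden = 4)
H1     <- layer1$encode(X)                    # codes from layer 1
layer2 <- train_autoencoder(H1, n_hidden = 2)
```

After this pretraining pass, `layer1$We` and `layer2$We` would serve as the initial encoder weights We(1) and We(2) when the full network is connected and fine-tuned end to end.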
