Distributed deep RNNs


Now that you have an understanding of RNNs, their applications, features, and architecture, we can move on to discuss how to use this network in a distributed architecture. Distributing an RNN is not an easy task, and hence only a few researchers have worked on it in the past. Although the primary concept of data parallelism is similar for all networks, distributing RNNs across multiple servers requires some careful design and a fair amount of tedious work as well.

Recently, a work from Google [119] attempted to distribute recurrent networks across many servers for a speech recognition task. In this section, we will discuss this work on distributed RNNs with the help of Hadoop.

Asynchronous stochastic gradient descent (ASGD) can be used for large-scale training of an RNN. ASGD has proven particularly successful in sequence-discriminative training of deep neural networks.
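
To make the idea concrete, here is a minimal, self-contained sketch of asynchronous SGD in plain Java (the language of Deeplearning4j, used elsewhere in this book). The model, data, and gradient function are hypothetical stand-ins chosen only for illustration; in a real distributed setup, the shared array would be a parameter server and the worker threads would be processes running on different Hadoop nodes.

```java
import java.util.Random;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

/**
 * Minimal sketch of asynchronous SGD (ASGD): several workers repeatedly read
 * the shared parameters, compute a gradient on their own data shard, and push
 * updates back without waiting for each other. The model and gradient are a
 * stand-in (a two-weight least-squares problem), not an RNN gradient.
 */
public class AsgdSketch {

    // Shared parameters playing the role of the parameter server. Workers apply
    // updates without locks, so they may read slightly stale values -- this
    // staleness is exactly what makes the method "asynchronous".
    static final double[] params = new double[]{0.0, 0.0};
    static final double LEARNING_RATE = 0.01;

    // Stand-in gradient of the squared error (w . x - y)^2 for one example.
    static double[] gradient(double[] w, double[] x, double y) {
        double err = w[0] * x[0] + w[1] * x[1] - y;
        return new double[]{2 * err * x[0], 2 * err * x[1]};
    }

    public static void main(String[] args) throws InterruptedException {
        int numWorkers = 4;
        ExecutorService pool = Executors.newFixedThreadPool(numWorkers);

        for (int w = 0; w < numWorkers; w++) {
            final long seed = w;
            pool.submit(() -> {
                Random shard = new Random(seed); // each worker draws its own "data shard"
                for (int step = 0; step < 10_000; step++) {
                    double x0 = shard.nextDouble(), x1 = shard.nextDouble();
                    double y = 3 * x0 + 2 * x1;   // synthetic target: true weights are (3, 2)
                    double[] g = gradient(params, new double[]{x0, x1}, y);
                    // Asynchronous update: no lock, no barrier between workers.
                    params[0] -= LEARNING_RATE * g[0];
                    params[1] -= LEARNING_RATE * g[1];
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.printf("Learned weights: %.3f, %.3f (target: 3, 2)%n", params[0], params[1]);
    }
}
```

The essential design choice is that workers never synchronize their updates: gradients computed against slightly stale parameters are still applied, trading a small amount of gradient staleness for much better hardware utilization at scale.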

A two-layer deep Long Short-Term Memory (LSTM) RNN is used to build the Long Short-Term...
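
A two-layer deep LSTM stack of this kind can be expressed with Deeplearning4j, the library used throughout this book. The following is only an illustrative sketch: the input, hidden, and output sizes are placeholder values, and layer class names and builder methods vary between Deeplearning4j releases (older versions use GravesLSTM instead of LSTM).

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.LSTM;
import org.deeplearning4j.nn.conf.layers.RnnOutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.learning.config.Adam;
import org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction;

public class TwoLayerLstmSketch {
    public static void main(String[] args) {
        int nIn = 40;        // e.g. number of acoustic features per frame (placeholder)
        int nHidden = 256;   // hidden units per LSTM layer (placeholder)
        int nOut = 10;       // number of output classes (placeholder)

        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .updater(new Adam(1e-3))
                .list()
                // First LSTM layer
                .layer(new LSTM.Builder().nIn(nIn).nOut(nHidden)
                        .activation(Activation.TANH).build())
                // Second, stacked LSTM layer -> a "two-layer deep" LSTM RNN
                .layer(new LSTM.Builder().nIn(nHidden).nOut(nHidden)
                        .activation(Activation.TANH).build())
                // Per-time-step classification output
                .layer(new RnnOutputLayer.Builder(LossFunction.MCXENT)
                        .activation(Activation.SOFTMAX)
                        .nIn(nHidden).nOut(nOut).build())
                .build();

        MultiLayerNetwork net = new MultiLayerNetwork(conf);
        net.init();
        System.out.println(net.summary());
    }
}
```

In a distributed setting, each worker would hold a replica of this network, compute gradients on its own partition of the data, and push them asynchronously to a shared parameter store, as in the ASGD sketch above.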