Hands-On Deep Learning Architectures with Python

By: Yuxi (Hayden) Liu, Saransh Mehta

Overview of this book

Deep learning architectures are composed of multilevel nonlinear operations that represent high-level abstractions, allowing you to learn useful feature representations from the data. This book will help you learn and implement deep learning architectures to solve a variety of deep learning research problems. Hands-On Deep Learning Architectures with Python explains the essential learning algorithms used for deep and shallow architectures. Packed with practical implementations and ideas to help you build efficient artificial intelligence (AI) systems, this book will help you learn how neural networks play a major role in building deep architectures. You will understand various deep learning architectures (such as AlexNet, VGG Net, and GoogLeNet) with easy-to-follow code and diagrams. In addition, the book will guide you in building and training various deep architectures, such as Boltzmann machines, autoencoders, convolutional neural networks (CNNs), recurrent neural networks (RNNs), models for natural language processing (NLP), generative adversarial networks (GANs), and more, all with practical implementations. By the end of this book, you will be able to construct deep models using popular frameworks and datasets, with the required design patterns for each architecture. You will be ready to explore the potential of deep architectures in today's world.
Table of Contents (15 chapters)

Section 1: The Elements of Deep Learning
Section 2: Convolutional Neural Networks
Section 3: Sequence Modeling
Section 4: Generative Adversarial Networks (GANs)
Section 5: The Future of Deep Learning and Advanced Artificial Intelligence

What are RBMs?

An RBM is a generative stochastic neural network. Generative means that the network models the probability distribution over its set of inputs; stochastic means that its neurons behave randomly when activated. A general diagram of an RBM is depicted as follows:

In general, an RBM is composed of one input layer, more commonly called the visible layer (v1, v2, v3, v4 in the diagram), and one hidden layer (h1, h2, h3, h4, for example). An RBM model consists of weights W = {wij} associated with the connections between the visible layer and the hidden layer, where wij connects visible unit vi to hidden unit hj, as well as biases a = {ai} for the visible layer and b = {bj} for the hidden layer.
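To make this notation concrete, here is a minimal NumPy sketch (illustrative, not the book's exact code) that sets up these parameters for a small RBM and samples the hidden layer stochastically given a visible vector; the layer sizes and the names n_visible, n_hidden, and sample_hidden are assumptions made for this example.

import numpy as np

# Minimal sketch of the RBM parameters described above: visible layer v,
# hidden layer h, weight matrix W = {wij}, visible bias a = {ai},
# and hidden bias b = {bj}.
n_visible = 4   # v1..v4 in the diagram
n_hidden = 4    # h1..h4 in the diagram

rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(n_visible, n_hidden))  # small random weights
a = np.zeros(n_visible)                                  # visible bias
b = np.zeros(n_hidden)                                   # hidden bias

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_hidden(v):
    # Stochastic activation: hidden unit hj turns on with probability
    # p(hj = 1 | v) = sigmoid(bj + sum_i vi * wij).
    p_h = sigmoid(b + v @ W)
    return (rng.random(n_hidden) < p_h).astype(float), p_h

# Example: sample the hidden units once for a binary visible vector.
v = np.array([1.0, 0.0, 1.0, 1.0])
h_sample, p_h = sample_hidden(v)
print("p(h=1|v):", p_h)
print("sampled h:", h_sample)

Sampling from p(h|v), rather than applying a deterministic threshold, is what gives the neurons the random behavior described above.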

There is no output layer in an RBM, so learning is very different from learning in feedforward networks, as outlined here:

  • Instead of reducing the loss...