Hands-On Deep Learning Architectures with Python

By: Yuxi (Hayden) Liu, Saransh Mehta

Overview of this book

Deep learning architectures are composed of multilevel nonlinear operations that represent high-level abstractions, allowing you to learn useful feature representations from data. This book will help you learn and implement deep learning architectures to resolve various deep learning research problems. Hands-On Deep Learning Architectures with Python explains the essential learning algorithms used for deep and shallow architectures. Packed with practical implementations and ideas to help you build efficient artificial intelligence (AI) systems, this book will help you learn how neural networks play a major role in building deep architectures. You will understand various deep learning architectures (such as AlexNet, VGG Net, and GoogLeNet) with easy-to-follow code and diagrams. The book will also guide you in building and training various deep architectures, such as the Boltzmann machine, autoencoders, convolutional neural networks (CNNs), recurrent neural networks (RNNs) for natural language processing (NLP), generative adversarial networks (GANs), and more, all with practical implementations. By the end of this book, you will be able to construct deep models using popular frameworks and datasets, applying the required design patterns for each architecture. You will be ready to explore the potential of deep architectures in today's world.
Table of Contents (15 chapters)

Section 1: The Elements of Deep Learning
Section 2: Convolutional Neural Networks
Section 3: Sequence Modeling
Section 4: Generative Adversarial Networks (GANs)
Section 5: The Future of Deep Learning and Advanced Artificial Intelligence

The evolution path of autoencoders

Autoencoders were first introduced as a method for unsupervised pre-training in Modular Learning in Neural Networks (D. Ballard, AAAI Proceedings, 1987). They were then used for dimensionality reduction, as in Auto-Association by Multilayer Perceptrons and Singular Value Decomposition (H. Bourlard and Y. Kamp, Biological Cybernetics, 1988; 59:291-294), and for non-linear feature learning, as in Autoencoders, Minimum Description Length, and Helmholtz Free Energy (G. Hinton and R. Zemel, Advances in Neural Information Processing Systems, 1994).
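To make the dimensionality-reduction role concrete, here is a minimal sketch of a single-hidden-layer autoencoder trained with plain gradient descent in NumPy; the layer sizes, toy dataset, and learning rate are illustrative assumptions, not choices taken from the works cited above:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_features, n_hidden = 8, 3          # compress 8-D inputs to a 3-D code
X = rng.random((100, n_features))    # toy data in [0, 1)

W1 = rng.normal(0, 0.1, (n_features, n_hidden))  # encoder weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, n_features))  # decoder weights
b2 = np.zeros(n_features)

lr = 0.5
for epoch in range(2000):
    H = sigmoid(X @ W1 + b1)         # encode
    X_hat = sigmoid(H @ W2 + b2)     # decode (reconstruct)
    err = X_hat - X                  # reconstruction error
    # Backpropagate the mean squared reconstruction loss
    d_out = err * X_hat * (1 - X_hat) / len(X)
    d_hid = (d_out @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ d_out
    b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_hid
    b1 -= lr * d_hid.sum(0)

code = sigmoid(X @ W1 + b1)          # low-dimensional representation
print(code.shape)                    # (100, 3)
```

The hidden activations `code` play the same role as the reduced coordinates in the Bourlard and Kamp comparison with singular value decomposition: the network is forced to squeeze the input through a narrower layer and reconstruct it from that code.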

Autoencoders have evolved over time, and several variants have been proposed in the past decade. In 2008, P. Vincent et al. introduced denoising autoencoders (DAEs) in Extracting and Composing Robust Features with Denoising Autoencoders (Proceedings of the 25th International Conference...
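The core idea behind denoising autoencoders can be sketched in a few lines: corrupt the input, then train the network to reconstruct the clean original. The masking fraction and toy array below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt(X, mask_fraction=0.3):
    """Randomly zero out roughly a fraction of each input's features."""
    mask = rng.random(X.shape) > mask_fraction
    return X * mask

X = rng.random((4, 8))      # clean inputs
X_noisy = corrupt(X)        # corrupted inputs fed to the encoder

# Training pairs: the network maps X_noisy -> reconstruction, but the
# loss compares the reconstruction against the clean X, forcing the
# model to learn features that are robust to the corruption.
print(X_noisy.shape)        # (4, 8)
```

Because the target is the uncorrupted input, the encoder cannot simply copy its input through; it must capture structure that survives the masking.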