TensorFlow 1.x Deep Learning Cookbook
Overview of this book

Deep neural networks (DNNs) have achieved great success in the fields of computer vision, speech recognition, and natural language processing. This exciting recipe-based guide will take you from the realm of DNN theory to implementing them practically to solve real-life problems in the artificial intelligence domain. In this book, you will learn how to efficiently use TensorFlow, Google's open source framework for deep learning. You will implement different deep learning networks, such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Deep Q-learning Networks (DQNs), and Generative Adversarial Networks (GANs), with easy-to-follow standalone recipes. You will learn how to use TensorFlow with Keras as the backend, and how different DNNs perform on popular datasets such as MNIST, CIFAR-10, and YouTube-8M. You will not only learn about the different mobile and embedded platforms supported by TensorFlow, but also how to set up cloud platforms for deep learning applications. You will also get a sneak peek at TPU architecture and how it will affect the future of DNNs. With crisp, no-nonsense recipes, you will become an expert at implementing deep learning techniques in growing real-world applications and research areas such as reinforcement learning, GANs, and autoencoders.

Introduction

Autoencoders, also known as Diabolo networks or autoassociators, were initially proposed in the 1980s by Hinton and the PDP group [1]. They are feedforward networks without feedback, and they learn via unsupervised learning. Like the multilayer perceptrons of Chapter 3, Neural Networks - Perceptrons, they use the backpropagation algorithm to learn, but with a major difference--the target is the same as the input.

We can think of an autoencoder as consisting of two cascaded networks--the first network is an encoder; it takes the input x and encodes it, using a transformation h, into the encoded signal y:

y = h(x)

The second network uses the encoded signal y as its input and performs another transformation f to get a reconstructed signal r:

r = f(y) = f(h(x))
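
As a concrete illustration, here is a minimal sketch of these two cascaded transformations in TensorFlow 1.x. The layer sizes (a 784-dimensional input, a 128-unit code) and the sigmoid activations are assumptions chosen for a dataset such as MNIST, not prescribed by the text:

import tensorflow as tf

# Input placeholder: batches of flattened 784-pixel images (for example, MNIST).
x = tf.placeholder(tf.float32, [None, 784])

# Encoder h: compresses the input x into a lower-dimensional code y = h(x).
y = tf.layers.dense(x, 128, activation=tf.nn.sigmoid, name='encoder')

# Decoder f: reconstructs the input from the code, r = f(y) = f(h(x)).
r = tf.layers.dense(y, 784, activation=tf.nn.sigmoid, name='decoder')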

We define the error e as the difference between the original input x and the reconstructed signal r, that is, e = x - r. The network...
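
Continuing the sketch above, training then minimizes this reconstruction error by backpropagation, with the input itself serving as the target. The mean squared error loss and the Adam optimizer below are common choices assumed here for illustration:

# Mean squared reconstruction error, built from e = x - r.
loss = tf.reduce_mean(tf.square(x - r))

# Backpropagation with the target equal to the input.
train_op = tf.train.AdamOptimizer(0.001).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # sess.run(train_op, feed_dict={x: batch})  # batch: unlabeled training data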