TensorFlow 1.x Deep Learning Cookbook

Overview of this book

Deep neural networks (DNNs) have achieved great success in the fields of computer vision, speech recognition, and natural language processing. This recipe-based guide takes you from DNN theory to practical implementations that solve real-life problems in the artificial intelligence domain. In this book, you will learn how to use TensorFlow, Google's open source framework for deep learning, efficiently. You will implement different deep learning networks, such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Deep Q-Networks (DQNs), and Generative Adversarial Networks (GANs), through easy-to-follow, standalone recipes. You will learn how to use Keras with TensorFlow as its backend, and how different DNNs perform on popular datasets such as MNIST, CIFAR-10, and YouTube-8M. You will learn not only about the mobile and embedded platforms supported by TensorFlow, but also how to set up cloud platforms for deep learning applications. You will also get a sneak peek at TPU architecture and how it will affect the future of DNNs. With crisp, no-nonsense recipes, you will become adept at implementing deep learning techniques in real-world applications and in growing research areas such as reinforcement learning, GANs, and autoencoders.
TensorFlow Processing Units

Introduction

Each TensorFlow computation is described in terms of a graph. This allows a natural degree of flexibility in the structure and the placement of operations: the graph can be split into multiple subgraphs that are assigned to different nodes in a cluster of servers.
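As a concrete illustration, here is a minimal sketch of explicit operation placement using the TensorFlow 1.x API. The device strings here are local placeholders so that the snippet runs on a single machine; in a real cluster they would name remote tasks such as /job:worker/task:0:

import tensorflow as tf  # TensorFlow 1.x API

# Pin the input constants to the CPU; in a cluster, this block
# could target a remote task instead.
with tf.device("/cpu:0"):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]], name="a")
    b = tf.constant([[2.0, 0.0], [1.0, 2.0]], name="b")

# This op could equally be pinned to "/gpu:0" or another worker;
# TensorFlow partitions the graph and inserts the necessary
# send/receive operations between devices automatically.
with tf.device("/cpu:0"):
    c = tf.matmul(a, b, name="c")

# log_device_placement prints where each operation actually ran.
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(c))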

I strongly suggest the reader have a look at the paper Large Scale Distributed Deep Networks, by Jeffrey Dean, Greg S. Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Quoc V. Le, Mark Z. Mao, Marc'Aurelio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, and Andrew Y. Ng, NIPS, 2012, https://research.google.com/archive/large_deep_networks_nips2012.html

One key result of the paper is the proof that it is possible to run distributed stochastic gradient descent (SGD), where multiple nodes work in parallel on data shards and update independently and...
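The TensorFlow 1.x API exposes this pattern directly through between-graph replication. The sketch below is only illustrative: the host names are hypothetical, and each task would additionally need to launch a tf.train.Server before the graph could actually be executed:

import tensorflow as tf  # TensorFlow 1.x API

# Hypothetical cluster: one parameter server holding the shared
# variables, two workers each training on its own data shard.
cluster = tf.train.ClusterSpec({
    "ps": ["ps0.example.com:2222"],
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
})

# replica_device_setter places the variables on the ps task and
# the remaining operations on the local worker.
with tf.device(tf.train.replica_device_setter(cluster=cluster)):
    x = tf.placeholder(tf.float32, name="x")
    y = tf.placeholder(tf.float32, name="y")
    w = tf.get_variable("w", shape=[], initializer=tf.zeros_initializer())
    loss = tf.square(w * x - y)
    # Each worker computes gradients on its own shard and applies
    # them independently, in the spirit of the asynchronous
    # (Downpour) SGD scheme described in the paper.
    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)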