Deep Learning with TensorFlow and Keras, Third Edition

By: Amita Kapoor, Antonio Gulli, Sujit Pal

Overview of this book

Deep Learning with TensorFlow and Keras teaches you neural networks and deep learning techniques using TensorFlow (TF) and Keras. You'll learn how to write deep learning applications in the most powerful, popular, and scalable machine learning stack available. TensorFlow 2.x focuses on simplicity and ease of use, with updates like eager execution, intuitive higher-level APIs based on Keras, and flexible model building on any platform. This book uses the latest TF 2.x features and libraries to present an overview of supervised and unsupervised machine learning models, and provides a comprehensive analysis of deep learning and reinforcement learning models using practical examples for the cloud, mobile, and large production environments. The book also shows you how to create neural networks with TensorFlow, runs through popular algorithms (regression, convolutional neural networks (CNNs), transformers, generative adversarial networks (GANs), recurrent neural networks (RNNs), natural language processing (NLP), and graph neural networks (GNNs)), covers working example apps, and then dives into TF in production, TF mobile, and TensorFlow with AutoML.

How to use TPUs with Colab

In this section, we show how to use TPUs with Colab. Just point your browser to https://colab.research.google.com/ and change the runtime from the Runtime menu, as shown in Figure 15.12. First, you'll need to enable TPUs for the notebook: navigate to Edit | Notebook settings and select TPU from the Hardware accelerator drop-down box:

Figure 15.12: Setting TPU as the hardware accelerator

Checking whether TPUs are available

First of all, let’s check whether a TPU is available by using this simple code fragment, which returns the IP address assigned to the TPU. Communication between the CPU and the TPU happens via gRPC (gRPC Remote Procedure Call), a modern, open-source, high-performance Remote Procedure Call (RPC) framework that can run in any environment:

%tensorflow_version 2.x
import tensorflow as tf
print("Tensorflow version " + tf.__version__)
try:
  # Detect the TPU cluster; cluster_spec() exposes the gRPC address assigned to it
  tpu = tf.distribute.cluster_resolver.TPUClusterResolver()
  print('Running on TPU ', tpu.cluster_spec().as_dict()['worker'])
except ValueError:
  raise BaseException('ERROR: Not connected to a TPU runtime')
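Once the TPU has been detected, the usual next step is to connect to the cluster, initialize the TPU system, and create a tf.distribute.TPUStrategy so that anything built inside its scope runs on the TPU. The following is a minimal sketch under those assumptions, not the book's own listing; the Sequential model is only a hypothetical placeholder to show where model construction goes:

# Minimal sketch: connect to the detected TPU and create a
# distribution strategy for it.
tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu)
tpu_strategy = tf.distribute.TPUStrategy(tpu)

# Variables and models created inside the scope are replicated
# across the TPU cores.
with tpu_strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(10)])

From here, compiling and fitting the model proceeds exactly as it would on a CPU or GPU runtime.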