What's New in TensorFlow 2.0

By: Ajay Baranwal, Alizishaan Khatri, Tanish Baranwal
Overview of this book

TensorFlow is an end-to-end machine learning platform for experts as well as beginners, and its new version, TensorFlow 2.0 (TF 2.0), improves its simplicity and ease of use. This book will help you understand and utilize the latest TensorFlow features. What's New in TensorFlow 2.0 starts by focusing on advanced concepts such as the new TensorFlow Keras APIs, eager execution, and efficient distribution strategies that help you to run your machine learning models on multiple GPUs and TPUs. The book then takes you through the process of building data ingestion and training pipelines, and it provides recommendations and best practices for feeding data to models created using the new tf.keras API. You'll explore the process of building an inference pipeline using TF Serving and other multi-platform deployments before moving on to explore the newly released AIY, which is essentially do-it-yourself AI. This book delves into the core APIs to help you build unified convolutional and recurrent layers and use TensorBoard to visualize deep learning models using what-if analysis. By the end of the book, you'll have learned about compatibility between TF 2.0 and TF 1.x and be able to migrate to TF 2.0 smoothly.

Using TF 2.0

TF 2.0 can be used in two main ways: through its low-level APIs and through its high-level APIs. The low-level workflow is built around APIs such as tf.GradientTape and tf.function.

The code flow for writing low-level code is to define the forward pass inside a function that takes the input data as an argument. This function is annotated with the tf.function decorator so that it runs in graph mode, along with all of the benefits that brings. To record and retrieve the gradients of the forward pass, both the decorated function and the loss computation are run inside the tf.GradientTape context manager; the gradients can then be calculated from the tape and applied to the model variables.
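To see how these pieces fit together, the following is a minimal sketch of that flow; the tiny linear model (the w and b variables), the loss, the learning rate, and the dummy data are placeholders invented for this sketch rather than code from the book:

import tensorflow as tf

# Hypothetical linear model; variables are tracked by the tape automatically.
w = tf.Variable(tf.random.normal(shape=(3, 1)))
b = tf.Variable(tf.zeros(shape=(1,)))

@tf.function  # compiles the forward pass into a graph
def forward(x):
    return tf.matmul(x, w) + b

def train_step(x, y, learning_rate=0.01):
    with tf.GradientTape() as tape:
        y_pred = forward(x)                      # forward pass, recorded on the tape
        loss = tf.reduce_mean(tf.square(y - y_pred))
    grads = tape.gradient(loss, [w, b])          # gradients of the loss w.r.t. the variables
    for var, grad in zip([w, b], grads):
        var.assign_sub(learning_rate * grad)     # plain gradient-descent update
    return loss

x = tf.random.normal(shape=(8, 3))
y = tf.random.normal(shape=(8, 1))
loss = train_step(x, y)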

Training code for tf.keras models can also be written with the low-level APIs, using tf.GradientTape. This is useful when more control and customizability are needed than the default tf.keras.Model.fit method provides. Training methods and pipelines are explained in depth in Chapter 4, Model Training and Use of TensorBoard.
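As an illustration, a custom training step for a tf.keras model could be sketched as follows; the model architecture, the Adam optimizer, and the loss function are assumptions chosen for this example:

import tensorflow as tf

# An assumed toy classifier; the final layer outputs raw logits.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10)
])
optimizer = tf.keras.optimizers.Adam()
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        logits = model(x, training=True)   # forward pass
        loss = loss_fn(y, logits)
    # Differentiate the loss with respect to the model's trainable variables
    # and let the optimizer apply the update.
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss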

A simple way to compare TF 2.0 with TF 1.x is this: the tensor that was run using sess.run in TF 1.x is now a function, and the feed dict and placeholders become that function's arguments. This reflects the philosophical change between TF 1.x and TF 2.0: a shift toward fully object-oriented code, in which all APIs and modules are callable objects.
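A rough side-by-side sketch of this shift follows; the TF 1.x half is shown through tf.compat.v1 and left commented out, since it cannot run in eager mode alongside the TF 2.0 code:

import numpy as np
import tensorflow as tf

data = np.ones((2, 3), dtype=np.float32)

# TF 1.x style (via tf.compat.v1): a placeholder is fed through sess.run.
# x = tf.compat.v1.placeholder(tf.float32, shape=(None, 3))
# y = tf.square(x)
# with tf.compat.v1.Session() as sess:
#     out = sess.run(y, feed_dict={x: data})

# TF 2.0 style: the graph becomes a function, and the feed dict
# becomes the function's arguments.
@tf.function
def square(x):
    return tf.square(x)

out = square(data)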

The high-level APIs in TF 2.0 are easier to use, and tf.keras is the default high-level API. tf.keras offers three different methods of model creation. These methods are as follows:

  • The Sequential API: This reflects another change brought about in TF 2.0. The previous high-level API for model creation in TF 1.x was the tf.layers module. That module has been converted to tf.keras.layers, where nearly all of its methods are replicated. This makes it easy to convert code from tf.layers to tf.keras.layers, as the two are almost completely identical.

With the Sequential API, a model is created as a linear stack of the symbolic tf.keras layer classes. This style only supports completely linear models, but it is the easiest style to use.

The following code block is an example of a Sequential API model:

import tensorflow as tf

# A linear stack of layers for 28 x 28 grayscale (MNIST-like) input.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu',
                           kernel_regularizer=tf.keras.regularizers.l2(0.04),
                           input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.1),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dense(10, activation='softmax')
])

train_data = tf.ones(shape=(1, 28, 28, 1))
test_data = tf.ones(shape=(1, 28, 28, 1))

# training=True enables dropout and batch-norm updates;
# training=False runs the model in inference mode.
train_out = model(train_data, training=True)
test_out = model(test_data, training=False)
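The calls above invoke the model directly on tensors. In practice, a Sequential model is usually compiled and trained with the built-in fit method; a minimal continuation of the example might look as follows (the optimizer, loss, and dummy label are assumptions, not part of the original example):

# Continuing from the model and train_data defined above.
train_labels = tf.constant([3])   # a dummy integer label for the single example

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(train_data, train_labels, epochs=1)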
  • The functional API: This API is more flexible than the Sequential API, in the sense that it is based on calling layer classes on the output tensor of the preceding layer. This means that nonlinear models and architectures, such as the Inception and ResNet architectures, can be implemented.

The following code block is an example of a model created with the functional API:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Each layer is called on the previous layer's output tensor.
encoder_input = keras.Input(shape=(28, 28, 1), name='img')
x = layers.Conv2D(16, 3, activation='relu')(encoder_input)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.Conv2D(16, 3, activation='relu')(x)
encoder_output = layers.GlobalMaxPooling2D()(x)

encoder = keras.Model(encoder_input, encoder_output, name='encoder')
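Because the model is assembled by chaining layer calls, it can be invoked directly to check its output shape. Continuing the example above (the dummy input image is an assumption for illustration):

img = tf.ones(shape=(1, 28, 28, 1))   # one dummy 28 x 28 grayscale image
features = encoder(img)
print(features.shape)                 # (1, 16): a 16-dimensional embedding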
  • The model subclassing technique: This is very similar to the low-level approach, in the sense that it is used to create custom models and layers that implement techniques not included in TensorFlow. Model subclassing involves creating a class that inherits from the tf.keras.Model base class and defines a call method, which takes an input argument and a training argument and then computes and returns the result of a forward pass through the model.

The following code block is an example of a model created with model subclassing:

import tensorflow as tf
from tensorflow.keras import layers

# ResNetBlock and num_classes are assumed to be defined elsewhere.
class ResNet(tf.keras.Model):

    def __init__(self):
        super(ResNet, self).__init__()
        self.block_1 = ResNetBlock()
        self.block_2 = ResNetBlock()
        self.global_pool = layers.GlobalAveragePooling2D()
        self.classifier = layers.Dense(num_classes)

    def call(self, inputs, training=False):
        # Forward pass: two residual blocks, pooling, then a classifier head.
        # Keras forwards the training flag to layers such as batch norm.
        x = self.block_1(inputs)
        x = self.block_2(x)
        x = self.global_pool(x)
        return self.classifier(x)

resnet = ResNet()
# The model must be compiled before fit; the optimizer and loss here
# are assumptions for illustration.
resnet.compile(optimizer='adam',
               loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
dataset = ...
resnet.fit(dataset, epochs=10)