
TF 2.0 installation and setup

This section describes the steps required to install TF 2.0 on your system using different methods and on different system configurations. Entry-level users are advised to start with the pip- and virtualenv-based methods. For the GPU version, Docker is the recommended method.

Installing and using pip

For the uninitiated, pip is a popular package management system in the Python community. If this is not installed on your system, please install it before proceeding further. On many Linux installations, Python and pip are installed by default. You can check whether pip is installed by typing the following command:

python3 -m pip --help

If you see help text describing the different commands that pip supports, pip is installed on your system. If pip is not installed, you will see an error message similar to No module named pip.
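
If pip is missing, one way to add it is via Python's bundled ensurepip module or your system package manager. The following is a minimal sketch; the exact command depends on your setup, and the apt-get line assumes a Debian- or Ubuntu-based system:

python3 -m ensurepip --upgrade
# or, on Debian/Ubuntu-based systems:
sudo apt-get install python3-pip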

It usually is a good idea to isolate your development environment. This greatly simplifies dependency management and streamlines the software development process. We can achieve environment isolation by using a tool in Python called virtualenv. This step is optional but highly recommended:
>>mkdir .venv
>>virtualenv --python=python3.6 .venv/
>>source .venv/bin/activate
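
If virtualenv is not available on your system, Python 3's built-in venv module provides equivalent isolation. A minimal sketch, reusing the same .venv directory name as above:

python3 -m venv .venv
source .venv/bin/activate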

You can install TensorFlow using pip, as shown in the following command: 

pip3 install tensorflow==version_tag

For example, if you want to install version 2.0.0-beta1, your command should be as follows:

pip3 install tensorflow==2.0.0-beta1
A complete list of the most recent package updates is available at https://pypi.org/project/tensorflow/#history.

You can test your installation by running the following command:

python3 -c "import tensorflow as tf; a = tf.constant(1); print(tf.math.add(a, a))"
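If the installation succeeded, eager execution prints the result of the addition immediately. The output should look something like the following (the exact formatting may vary slightly between versions):

tf.Tensor(2, shape=(), dtype=int32)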

Using Docker

If you would like to isolate your TensorFlow installation from the rest of your system, you might want to consider installing it using a Docker image. This would require you to have Docker installed on your system. Installation instructions are available at https://docs.docker.com/install/.

In order to use Docker without sudo on a Linux system, please follow the post-install steps at:
https://docs.docker.com/install/linux/linux-postinstall/.

The TensorFlow team officially supports Docker images as a mode of installation. To the user, one implication of this is that updated Docker images will be made available for download at https://hub.docker.com/r/tensorflow/tensorflow/.

Download a Docker image locally using the following command:

docker pull tensorflow/tensorflow:YOUR_TAG_HERE
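
For example, to pull the most recent stable CPU-only image you would use the latest tag; note that, at the time of writing, latest may still point to a 1.x release, so check the Docker Hub tags page linked above for the exact TF 2.0 tag:

docker pull tensorflow/tensorflow:latest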

The preceding command downloads the Docker image from the central repository (Docker Hub). To run code using this image, you need to start a new container and type the following:

docker run -it --rm tensorflow/tensorflow:YOUR_TAG_HERE \
python -c "import tensorflow as tf; a = tf.constant(1); print(tf.math.add(a, a))"
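
To run your own scripts rather than a one-line snippet, you can mount a local directory into the container. The following is a minimal sketch; my_script.py is a hypothetical file in your current working directory:

docker run -it --rm -v "$PWD":/workspace -w /workspace \
tensorflow/tensorflow:YOUR_TAG_HERE python my_script.py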

A Docker-based installation is also a good option if you intend to use GPUs. Detailed instructions for this are provided in the next section. 

GPU installation

Installing the GPU version of TensorFlow is slightly different from the process for the CPU version. It can be installed using both pip and Docker. The choice of installation process boils down to the end objective. The Docker-based process is easier, as it involves installing fewer additional components and helps avoid library conflicts. It can, however, introduce the additional overhead of managing the container environment. The pip-based version involves installing more additional components, but offers a greater degree of flexibility and efficiency: it enables the resultant installation to run directly on the local host without any virtualization.

To proceed, assuming you have the necessary hardware set up, you will need, at a minimum, the NVIDIA GPU drivers. Detailed installation instructions are provided at https://www.nvidia.com/Download/index.aspx?lang=en-us.
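
Once the drivers are installed, you can confirm that they are working and that the GPU is visible to the system with the nvidia-smi utility, which ships with the driver package:

nvidia-smi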

Installing using Docker

At the time of writing this book, this option is only available for NVIDIA GPUs running on Linux hosts. If you meet the platform constraints, then this is an excellent option, as it significantly simplifies the process. It also minimizes the number of additional software components that you need to install by leveraging a pre-built container. To proceed, we need to install nvidia-docker. Please refer to the following links for additional details:

Once you've completed the steps described in the preceding links, take the following steps:

  1. Test whether the GPU is available:
lspci | grep -i nvidia
  2. Verify your nvidia-docker installation (for v2 of nvidia-docker):
docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi
  3. Download a Docker image locally:
docker pull tensorflow/tensorflow:YOUR_TAG_HERE
  4. Let's say you're trying to run the most recent version of the GPU-based image. You'd type the following:
docker pull tensorflow/tensorflow:latest-gpu
  5. Start the container and run the code (a quick check that TensorFlow can see the GPU from inside the container is sketched after this list):
docker run --runtime=nvidia -it --rm tensorflow/tensorflow:latest-gpu \
python -c "import tensorflow as tf; a = tf.constant(1); print(tf.math.add(a, a))"
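
As a quick sanity check that the GPU is actually visible to TensorFlow inside the container, you can list the physical GPU devices. This is a minimal sketch, assuming the latest-gpu image pulled above:

docker run --runtime=nvidia -it --rm tensorflow/tensorflow:latest-gpu \
python -c "import tensorflow as tf; print(tf.config.experimental.list_physical_devices('GPU'))"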

Installing using pip

If you would like to use TensorFlow with an NVIDIA GPU, you need to install the following additional pieces of software on your system. Detailed instructions for installation are provided in the links shared:

Once all the preceding components have been installed, the remaining process is fairly straightforward.

Install TensorFlow using pip:

pip3 install tensorflow-gpu==version_tag

For example, if you want to install TensorFlow 2.0.0-alpha0, you'd type the following command:

pip3 install tensorflow-gpu==2.0.0-alpha0

A complete list of the most recent package updates is available at https://pypi.org/project/tensorflow-gpu/#history.

You can test your installation by running the following command:

python3 -c "import tensorflow as tf; a = tf.constant(1); print(tf.math.add(a, a))"
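
To confirm that the GPU build can actually see your GPU, you can additionally list the physical GPU devices. This is a minimal sketch using TF 2.0's device-listing API:

python3 -c "import tensorflow as tf; print(tf.config.experimental.list_physical_devices('GPU'))"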