TF 2.0 installation and setup
This section describes the steps required to install TF 2.0 on your system using different methods and on different system configurations. Entry-level users are advised to start with the pip- and virtualenv-based methods; for users of the GPU version, Docker is the recommended method.
Installing and using pip
For the uninitiated, pip is a popular package management system in the Python community. If this is not installed on your system, please install it before proceeding further. On many Linux installations, Python and pip are installed by default. You can check whether pip is installed by typing the following command:
python3 -m pip --help
If you see a blurb describing the different commands that pip supports, pip is installed on your system. If it is not, you will see an error message similar to No module named pip.
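The same module invocation doubles as a quick availability check. The following is a minimal sketch; the commented-out ensurepip line is an assumption about your setup and may require sudo on a system-wide Python:

```shell
# Check whether pip is available for your Python 3 interpreter:
python3 -m pip --version
# If this fails with "No module named pip", you can usually bootstrap pip
# from the standard library (may require sudo on a system-wide Python):
# python3 -m ensurepip --upgrade
```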
It is good practice to install Python packages inside a virtual environment so that they do not interfere with system-wide packages. You can create and activate one using virtualenv, as follows:
>>mkdir .venv
>>virtualenv --python=python3.6 .venv/
>>source .venv/bin/activate
You can install TensorFlow using pip, as shown in the following command:
pip3 install tensorflow==version_tag
For example, if you want to install version 2.0.0-beta1, your command should be as follows:
pip3 install tensorflow==2.0.0-beta1
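To make the installation reproducible across machines, you can also pin the version in a requirements file (the filename requirements.txt is a convention, not a requirement):

```
tensorflow==2.0.0-beta1
```

Then install it with pip3 install -r requirements.txt, and pip will resolve exactly the pinned version.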
You can test your installation by running the following command:
python3 -c "import tensorflow as tf; a = tf.constant(1); print(tf.math.add(a, a))"
Using Docker
If you would like to isolate your TensorFlow installation from the rest of your system, you might want to consider installing it using a Docker image. This would require you to have Docker installed on your system. Installation instructions are available at https://docs.docker.com/install/.
If you are on Linux, also complete the post-installation steps described at https://docs.docker.com/install/linux/linux-postinstall/.
The TensorFlow team officially supports Docker images as a mode of installation. To the user, one implication of this is that updated Docker images will be made available for download at https://hub.docker.com/r/tensorflow/tensorflow/.
Download a Docker image locally using the following command:
docker pull tensorflow/tensorflow:YOUR_TAG_HERE
The preceding command downloads the Docker image from the centralized repository (Docker Hub). To run code using this image, you need to start a new container and type the following:
docker run -it --rm tensorflow/tensorflow:YOUR_TAG_HERE \
python -c "import tensorflow as tf; a = tf.constant(1); print(tf.math.add(a, a))"
A Docker-based installation is also a good option if you intend to use GPUs. Detailed instructions for this are provided in the next section.
GPU installation
Installing the GPU version of TensorFlow is slightly different from the process for the CPU version. It can be installed using both pip and Docker, and the choice boils down to your end objective. The Docker-based process is easier, as it involves installing fewer additional components and helps avoid library conflicts; it does, however, introduce the additional overhead of managing a container environment. The pip-based process involves installing more additional components but offers a greater degree of flexibility and efficiency, as the resulting installation runs directly on the local host without any virtualization.
To proceed, assuming you have the necessary hardware set up, you will need, at a minimum, the NVIDIA GPU drivers. Detailed installation instructions are provided at https://www.nvidia.com/Download/index.aspx?lang=en-us.
Installing using Docker
At the time of writing this book, this option is only available for NVIDIA GPUs running on Linux hosts. If you meet these platform constraints, then this is an excellent option, as it significantly simplifies the process and minimizes the number of additional software components you need to install by leveraging a pre-built container. To proceed, we need to install nvidia-docker. Please refer to the following links for additional details:
- Installation: https://github.com/NVIDIA/nvidia-docker
- FAQs: https://github.com/NVIDIA/nvidia-docker/wiki/Frequently-Asked-Questions#platform-support
Once you've completed the steps described in the preceding links, take the following steps:
- Test whether the GPU is available:
lspci | grep -i nvidia
- Verify your nvidia-docker installation (for v2 of nvidia-docker):
docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi
- Download a Docker image locally:
docker pull tensorflow/tensorflow:YOUR_TAG_HERE
- Let's say you're trying to run the most recent version of the GPU-based image. You'd type the following:
docker pull tensorflow/tensorflow:latest-gpu
- Start the container and run the code:
docker run --runtime=nvidia -it --rm tensorflow/tensorflow:latest-gpu \
python -c "import tensorflow as tf; a = tf.constant(1); print(tf.math.add(a, a))"
Installing using pip
If you would like to use TensorFlow with an NVIDIA GPU, you need to install the following additional pieces of software on your system. Detailed installation instructions are provided at the links that follow:
- CUDA Toolkit: TensorFlow supports CUDA 10.0 (https://developer.nvidia.com/cuda-toolkit-archive)
- CUPTI: Ships with the CUDA Toolkit (https://docs.nvidia.com/cuda/cupti/)
- The cuDNN SDK (version 7.4.1 or above) (https://developer.nvidia.com/cudnn)
- (Optional) TensorRT 5.0 to improve latency and throughput for inference on some models (https://docs.nvidia.com/deeplearning/sdk/tensorrt-install-guide/index.html)
Once all the preceding components have been installed, installing TensorFlow itself is a fairly straightforward process.
Install TensorFlow using pip:
pip3 install tensorflow-gpu==version_tag
For example, if you want to install version 2.0.0-alpha0, then you'd have to type in the following command:
pip3 install tensorflow-gpu==2.0.0-alpha0
A complete list of the most recent package updates is available at https://pypi.org/project/tensorflow/#history.
You can test your installation by running the following command:
python3 -c "import tensorflow as tf; a = tf.constant(1); print(tf.math.add(a, a))"