What's New in TensorFlow 2.0

By: Ajay Baranwal, Alizishaan Khatri, Tanish Baranwal

Overview of this book

TensorFlow is an end-to-end machine learning platform for experts as well as beginners, and its new version, TensorFlow 2.0 (TF 2.0), improves its simplicity and ease of use. This book will help you understand and utilize the latest TensorFlow features. What's New in TensorFlow 2.0 starts by focusing on advanced concepts such as the new TensorFlow Keras APIs, eager execution, and efficient distribution strategies that help you to run your machine learning models on multiple GPUs and TPUs. The book then takes you through the process of building data ingestion and training pipelines, and it provides recommendations and best practices for feeding data to models created using the new tf.keras API. You'll explore the process of building an inference pipeline using TF Serving and other multi-platform deployments before moving on to explore the newly released AIY, which is essentially do-it-yourself AI. This book delves into the core APIs to help you build unified convolutional and recurrent layers and use TensorBoard to visualize deep learning models using what-if analysis. By the end of the book, you'll have learned about compatibility between TF 2.0 and TF 1.x and be able to migrate to TF 2.0 smoothly.
Table of Contents (13 chapters)

Summary

TFLite is a feature of TF 2.0 that takes a TF model and compresses and optimizes it to run on embedded Linux devices or other low-power devices where binary size matters. A TF model can be converted into a TFLite model in three ways: from a saved model, from a tf.keras model, or from a concrete function. Conversion produces a .tflite file, which can then be transferred to the target device and run using the TFLite interpreter. The converted model is optimized to use hardware acceleration and is stored in the FlatBuffer format for fast read speeds. Further optimization techniques can be applied to the model, such as quantization, which converts 32-bit floating-point numbers into 8-bit fixed-point numbers at the cost of a small loss in accuracy. Devices that TFLite can run on include the Edge TPU, the NVIDIA Jetson Nano, and the Raspberry Pi. Google...
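The quantization tradeoff mentioned above can be illustrated with a minimal sketch in plain Python (deliberately not using TensorFlow itself, so the arithmetic is visible): an affine mapping from 32-bit floats to 8-bit integers and back. The `quantize`/`dequantize` helpers and the chosen scale are illustrative assumptions, not TFLite's actual implementation, but they show why the round-trip error stays small.

```python
def quantize(values, scale, zero_point):
    # Affine quantization: q = round(x / scale) + zero_point, clamped to int8 range.
    return [max(-128, min(127, round(v / scale) + zero_point)) for v in values]

def dequantize(quantized, scale, zero_point):
    # Inverse mapping back to floats; the rounding above is the only information lost.
    return [(q - zero_point) * scale for q in quantized]

# Hypothetical weights in [-1, 1]; scale spreads that range over 256 int8 levels.
weights = [-1.0, -0.5, 0.0, 0.25, 1.0]
scale = 2.0 / 255
zero_point = 0

q = quantize(weights, scale, zero_point)
restored = dequantize(q, scale, zero_point)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

The maximum round-trip error stays below one quantization step (`scale`), which is why 8-bit quantization typically costs only a minimal amount of accuracy while shrinking weight storage by 4x.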