Learn TensorFlow Enterprise

By: KC Tung

Overview of this book

TensorFlow as a machine learning (ML) library has matured into a production-ready ecosystem. This beginner’s book uses practical examples to enable you to build and deploy TensorFlow models using optimal settings that ensure long-term support without having to worry about library deprecation or being left behind when it comes to bug fixes or workarounds. The book begins by showing you how to refine your TensorFlow project and set it up for enterprise-level deployment. You’ll then learn how to choose a future-proof version of TensorFlow. As you advance, you’ll find out how to build and deploy models in a robust and stable environment by following recommended practices made available in TensorFlow Enterprise. This book also teaches you how to manage your services better and enhance the performance and reliability of your artificial intelligence (AI) applications. You’ll discover how to use various enterprise-ready services to accelerate your ML and AI workflows on Google Cloud Platform (GCP). Finally, you’ll scale your ML models and handle heavy workloads across CPUs, GPUs, and Cloud TPUs. By the end of this TensorFlow book, you’ll have learned the patterns needed for TensorFlow Enterprise model development, data pipelines, training, and deployment.
Table of Contents (15 chapters)
Section 1 – TensorFlow Enterprise Services and Features
Section 2 – Data Preprocessing and Modeling
Section 3 – Scaling and Tuning ML Works
Section 4 – Model Optimization and Deployment

Converting a full model to a reduced float16 model

In this section, we are going to load the model we just trained and quantize it to a reduced float16 model. To follow the step-by-step explanation, it is recommended that you use JupyterLab or a Jupyter notebook:

  1. Let's start by loading the trained model:
    import tensorflow as tf
    import pathlib
    import os
    import numpy as np
    from matplotlib.pyplot import imshow
    import matplotlib.pyplot as plt
    # Path to the SavedModel produced when we trained the base model
    root_dir = '../train_base_model'
    model_dir = 'trained_resnet_vector-unquantized/save_model'
    saved_model_dir = os.path.join(root_dir, model_dir)
    trained_model = tf.saved_model.load(saved_model_dir)

    The tf.saved_model.load API helps us to load the saved model we built and trained.

  2. Then we will create a converter object that refers to the SavedModel directory with the following line of code:
    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
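
    At this point the converter can be configured to produce the reduced float16 model. What follows is a minimal sketch of the remaining conversion steps using the standard tf.lite post-training quantization APIs, building on the converter object created in step 2; the output filename resnet_float16.tflite is an assumption chosen for illustration:

    # Enable default optimizations and restrict weights to float16
    # (post-training float16 quantization).
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.target_spec.supported_types = [tf.float16]

    # Run the conversion; the result is a TFLite flatbuffer held in memory.
    tflite_fp16_model = converter.convert()

    # Write the reduced float16 model to disk
    # (the filename here is illustrative, not from the book).
    pathlib.Path('resnet_float16.tflite').write_bytes(tflite_fp16_model)

    Because the weights are stored as 16-bit floats, the resulting file is typically about half the size of the full float32 model, while remaining loadable by the standard TFLite interpreter.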