The Definitive Guide to Google Vertex AI

By: Jasmeet Bhatia, Kartik Chaudhary
Overview of this book

While AI has become an integral part of every organization today, the development of large-scale ML solutions and the management of complex ML workflows in production continue to pose challenges for many. Google’s unified data and AI platform, Vertex AI, directly addresses these challenges with its array of MLOps tools designed for overall workflow management. This book is a comprehensive guide that lets you explore Google Vertex AI’s easy-to-advanced-level features for end-to-end ML solution development. Throughout this book, you’ll discover how Vertex AI empowers you by providing essential tools for critical tasks, including data management, model building, large-scale experimentation, metadata logging, model deployment, and monitoring. You’ll learn how to harness the full potential of Vertex AI for developing and deploying no-code, low-code, or fully customized ML solutions. This book takes a hands-on approach to developing and deploying real-world ML solutions on Google Cloud, leveraging key technologies such as vision, NLP, generative AI, and recommendation systems. Additionally, this book covers pre-built and turnkey solution offerings, as well as guidance on seamlessly integrating them into your ML workflows. By the end of this book, you’ll have the confidence to develop and deploy large-scale, production-grade ML solutions using the MLOps tooling and best practices from Google.
Table of Contents (24 chapters)

Part 1: The Importance of MLOps in a Real-World ML Deployment
Part 2: Machine Learning Tools for Custom Models on Google Cloud
Part 3: Prebuilt/Turnkey ML Solutions Available in GCP
Part 4: Building Real-World ML Solutions with Google Cloud

Deploying a vision model to a Vertex AI endpoint

In the previous section, we completed our experiment of training a TF-based vision model to identify defects in product images. We now have a trained model that can identify images of defective or broken bangles. To make this model usable in downstream applications, we need to deploy it to an endpoint so that we can query that endpoint and get outputs for new input images on demand. Certain factors are important to consider while deploying a model, such as expected traffic, expected latency, and expected cost. Based on these factors, we can choose the best infrastructure for our models. If there are strict low-latency requirements, we can deploy our model to machines with accelerators, such as Graphics Processing Units (GPUs) or Tensor Processing Units (TPUs). Conversely, if we don’t need online or on-demand predictions, we don’t need to deploy our model to an endpoint at all. Offline batch...
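As a rough illustration of the workflow above, the sketch below uses the google-cloud-aiplatform Python SDK to upload a trained model and deploy it to an endpoint, picking hardware based on the latency requirement. The project ID, GCS path, display name, and container image are placeholder assumptions, not values from the book; treat this as a sketch of the pattern rather than the book's exact code.

```python
# Sketch: deploy a trained vision model to a Vertex AI endpoint.
# All names below (project, bucket, display name) are hypothetical placeholders.

def choose_deploy_spec(low_latency: bool) -> dict:
    """Pick deployment hardware from the latency requirement.

    Illustrative mapping only: strict low-latency -> GPU-backed machine,
    otherwise a plain CPU machine. Not an official sizing recommendation.
    """
    if low_latency:
        return {
            "machine_type": "n1-standard-8",
            "accelerator_type": "NVIDIA_TESLA_T4",
            "accelerator_count": 1,
        }
    return {
        "machine_type": "n1-standard-4",
        "accelerator_type": None,
        "accelerator_count": 0,
    }


def deploy_vision_model(artifact_uri: str, low_latency: bool = True):
    """Upload a SavedModel from GCS and deploy it to a new endpoint."""
    # Imported here so the pure helper above works without GCP credentials.
    from google.cloud import aiplatform

    aiplatform.init(project="my-gcp-project", location="us-central1")

    model = aiplatform.Model.upload(
        display_name="bangle-defect-classifier",
        artifact_uri=artifact_uri,  # e.g. "gs://my-bucket/models/defect-tf/"
        serving_container_image_uri=(
            "us-docker.pkg.dev/vertex-ai/prediction/tf2-gpu.2-12:latest"
        ),
    )

    spec = choose_deploy_spec(low_latency)
    endpoint = model.deploy(
        machine_type=spec["machine_type"],
        accelerator_type=spec["accelerator_type"],
        accelerator_count=spec["accelerator_count"],
        min_replica_count=1,
        max_replica_count=2,  # allow scale-out under higher traffic
    )
    return endpoint  # query with endpoint.predict(instances=[...])
```

Once deployed, the endpoint can be queried with `endpoint.predict(instances=[...])`, where each instance is a preprocessed image in the format the serving container expects.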