MLOps with Red Hat OpenShift

By : Ross Brigoli, Faisal Masood

Overview of this book

MLOps with OpenShift offers practical insights for implementing MLOps workflows on the dynamic OpenShift platform. As organizations worldwide seek to harness the power of machine learning operations, this book lays the foundation for your MLOps success. Starting with an exploration of key MLOps concepts, including data preparation, model training, and deployment, you’ll prepare to unleash OpenShift capabilities, kicking off with a primer on containers, pods, operators, and more. With the groundwork in place, you’ll be guided to MLOps workflows, uncovering the applications of popular machine learning frameworks for training and testing models on the platform. As you advance through the chapters, you’ll focus on the open-source data science and machine learning platform, Red Hat OpenShift Data Science, and its partner components, such as Pachyderm and Intel OpenVino, to understand their role in building and managing data pipelines, as well as deploying and monitoring machine learning models. Armed with this comprehensive knowledge, you’ll be able to implement MLOps workflows on the OpenShift platform proficiently.
Table of Contents (13 chapters)

Part 1: Introduction
Part 2: Provisioning and Configuration
Part 3: Operating ML Workloads

Logging inference calls

Logging is an essential part of any software architecture. We use logs to recall and investigate what happened to the system. Unlike monitoring, which focuses on the current state of the system, logging focuses on events that occurred in the past, with the objective of letting you look back on or audit those events.

Logging in MLOps is no different. However, a few aspects of logging are more common in ML model inference than in traditional software. Here are some of the properties to look out for in ML model inference logging:

  • Unstructured data: The data you pass into an inference call may not always be simple JSON-formatted text; it could also be an image, video, or audio clip. This kind of unstructured data may require a different kind of storage system for logs.
  • Non-deterministic behavior: Some models, depending on the algorithm used, may not always return the same output for the same input.
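To make both properties concrete, here is a minimal sketch of an inference logging wrapper. It is not from the book; the function name, model name, and record fields are illustrative assumptions. The idea it demonstrates is common practice: instead of embedding a large unstructured payload (image, audio) in the log line, record a content hash and size that reference the raw input stored elsewhere, and log a unique request ID so non-deterministic outputs can be traced back to the exact call that produced them.

```python
import hashlib
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("inference")

def log_inference(model_name: str, payload: bytes, prediction, latency_ms: float) -> dict:
    """Log one inference call as a structured JSON record.

    The raw unstructured payload is NOT written to the log; it is
    referenced by its SHA-256 hash, under the assumption that the
    payload itself is persisted to object storage keyed by that hash.
    """
    record = {
        "request_id": str(uuid.uuid4()),   # ties a specific output to a specific call
        "timestamp": time.time(),
        "model": model_name,
        "input_sha256": hashlib.sha256(payload).hexdigest(),
        "input_bytes": len(payload),
        "prediction": prediction,          # the (possibly non-deterministic) model output
        "latency_ms": latency_ms,
    }
    logger.info(json.dumps(record))
    return record

# Example: logging an image-classification call (model name and payload are made up)
payload = b"\x89PNG...binary image bytes..."
rec = log_inference("image-classifier-v2", payload, {"label": "cat", "score": 0.93}, 12.4)
```

Because the hash is deterministic even when the model is not, two calls with identical inputs but different outputs show up in the logs with the same `input_sha256`, which is exactly the signal you need when auditing non-deterministic behavior.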