MLOps with Red Hat OpenShift

By: Ross Brigoli, Faisal Masood

Overview of this book

MLOps with OpenShift offers practical insights for implementing MLOps workflows on the dynamic OpenShift platform. As organizations worldwide seek to harness the power of machine learning operations, this book lays the foundation for your MLOps success. Starting with an exploration of key MLOps concepts, including data preparation, model training, and deployment, you’ll prepare to unleash OpenShift’s capabilities, kicking off with a primer on containers, pods, operators, and more. With the groundwork in place, you’ll be guided through MLOps workflows, uncovering the applications of popular machine learning frameworks for training and testing models on the platform. As you advance through the chapters, you’ll focus on the open-source data science and machine learning platform, Red Hat OpenShift Data Science, and its partner components, such as Pachyderm and Intel OpenVINO, to understand their role in building and managing data pipelines, as well as deploying and monitoring machine learning models. Armed with this comprehensive knowledge, you’ll be able to implement MLOps workflows on the OpenShift platform proficiently.
Table of Contents (13 chapters)

Part 1: Introduction
Part 2: Provisioning and Configuration
Part 3: Operating ML Workloads

Training a model for face detection

In this section, you will use a pre-trained model to build your own model for detecting a human face in a picture. Though this is a simple example, we have chosen it for a reason: our aim is to show you how the different components of such a system work together, while letting you test it from any laptop with a webcam. You can enhance and retrain the model for more complicated use cases if needed.

You will use Google’s EfficientNet, a highly efficient convolutional neural network, as the pre-trained base model. With pre-trained models, you do not need a huge amount of data to train the model for your use case, which saves you both time and compute resources. This technique of reusing pre-trained models is called transfer learning.
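As a sketch of what transfer learning looks like in code, the snippet below loads an EfficientNet base with its original classification head removed, freezes its pre-trained weights, and attaches a small new head. The use of TensorFlow/Keras and the `EfficientNetB0` variant are assumptions for illustration; the book's actual notebooks may differ.

```python
# Transfer-learning sketch (assumes TensorFlow/Keras is installed).
# EfficientNetB0 is chosen here for illustration; any EfficientNet variant works similarly.
import tensorflow as tf
from tensorflow.keras import layers, models

# Load the pre-trained base without its original 1000-class ImageNet head.
# weights="imagenet" downloads the pre-trained parameters on first use.
base = tf.keras.applications.EfficientNetB0(
    include_top=False,
    weights="imagenet",
    input_shape=(224, 224, 3),
)
base.trainable = False  # freeze the base: only the new head's weights will train

# Attach a small classification head; 3 classes mirrors the face/finger/other example.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(3, activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # train on your labeled images
```

Because the base is frozen, only the few thousand parameters in the new head are updated during training, which is why a relatively small labeled dataset is enough.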

Because EfficientNet is designed for image classification, in this example we will use it to classify whether an image contains a human face, a human finger, or something else. As a result...