
MLOps with Red Hat OpenShift

By: Ross Brigoli, Faisal Masood

Overview of this book

MLOps with OpenShift offers practical insights for implementing MLOps workflows on the dynamic OpenShift platform. As organizations worldwide seek to harness the power of machine learning operations, this book lays the foundation for your MLOps success. Starting with an exploration of key MLOps concepts, including data preparation, model training, and deployment, you’ll prepare to unleash OpenShift’s capabilities, kicking off with a primer on containers, pods, operators, and more. With the groundwork in place, you’ll be guided through MLOps workflows, uncovering the applications of popular machine learning frameworks for training and testing models on the platform. As you advance through the chapters, you’ll focus on the open-source data science and machine learning platform, Red Hat OpenShift Data Science, and its partner components, such as Pachyderm and Intel OpenVINO, to understand their role in building and managing data pipelines, as well as deploying and monitoring machine learning models. Armed with this comprehensive knowledge, you’ll be able to implement MLOps workflows on the OpenShift platform proficiently.
Table of Contents (13 chapters)

Part 1: Introduction
Part 2: Provisioning and Configuration
Part 3: Operating ML Workloads

Using GPU acceleration for model training

In the previous section, you customized the software components your team needs to build models. In this section, you will see how RHODS makes it easy to use specialized hardware in your workbench.

Imagine that you are working on a simple supervised learning model and do not need any specialized hardware, such as a GPU, to complete your work. If you work on a laptop, the hardware is fixed: you cannot change it dynamically, and it would be expensive for an organization to give every data scientist specialized GPU hardware. It is even worse when a new GPU model is released and you have already bought the older version for your team. RHODS enables you to provision hardware on demand, so if one team member needs a GPU, they can simply select it from the UI and start using it. When their work is done, the GPU is released back to the hardware pool. This dynamic nature not only reduces costs but...
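Once a workbench has been granted a GPU from the pool, a common first step in the notebook is to confirm that an accelerator is actually attached before starting training. The sketch below is one heuristic way to do this, relying on the `NVIDIA_VISIBLE_DEVICES` environment variable that the NVIDIA container runtime sets inside GPU-enabled containers; this is an illustrative assumption, and framework-specific checks such as PyTorch's `torch.cuda.is_available()` are the usual alternative inside a real workbench image:

```python
import os

def gpu_allocated() -> bool:
    """Heuristic: the NVIDIA container runtime lists the GPUs granted to
    this container in NVIDIA_VISIBLE_DEVICES. An empty value, "void", or
    "none" means no GPU was attached to the workbench pod."""
    devices = os.environ.get("NVIDIA_VISIBLE_DEVICES", "")
    return devices not in ("", "void", "none")

# Choose a training device string accordingly (e.g., for PyTorch).
device = "cuda" if gpu_allocated() else "cpu"
print(f"Training will run on: {device}")
```

Because the check only reads an environment variable, the same notebook runs unchanged whether or not a GPU was selected in the UI, quietly falling back to CPU training when no accelerator is present.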