Feature Store for Machine Learning

By: Jayanth Kumar M J
Overview of this book

A feature store is a storage layer in machine learning (ML) operations where data scientists and ML engineers store transformed and curated features for ML models. This makes those features available for model training, inference (batch and online), and reuse in other ML pipelines. Knowing how to utilize feature stores to their fullest potential can save you a lot of time and effort, and this book will teach you everything you need to know to get started.

Feature Store for Machine Learning is for data scientists who want to learn how to use feature stores to share and reuse each other's work and expertise. You'll be able to implement practices that eliminate the reprocessing of data, make models reproducible, and reduce duplicated work, thus improving the time to production of your ML models. While this book offers some theoretical groundwork for developers who are just getting to grips with feature stores, there's plenty of practical know-how for those ready to put their knowledge to work. With a hands-on approach to implementation and the associated methodologies, you'll get up and running in no time.

By the end of this book, you'll have understood why feature stores are essential and how to use them in your ML projects, both on your local system and in the cloud.
Table of Contents (13 chapters)

Section 1 – Why Do We Need a Feature Store?
Section 2 – A Feature Store in Action
Section 3 – Alternatives, Best Practices, and a Use Case

Online model inference with Feast

In the last section, we discussed how to use Feast for batch model inference. Now it's time to look at the online model use case. One of the requirements of online model inference is that it should return results with low latency and be invokable from anywhere. A common paradigm is to expose the model as a REST API endpoint. In the Model packaging section, we logged the model using the joblib library. That model needs to be wrapped with a RESTful framework to be deployable as a REST endpoint. Not only that, but the features also need to be fetched in real time when the inference endpoint is invoked. Unlike in Chapter 1, An Overview of the Machine Learning Life Cycle, where we didn't have the infrastructure for serving features in real time, here we already have that in place thanks to Feast. However, we need to run a command to sync the offline features to the online store using the Feast library. Let's do that first. Later...
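As a rough sketch of what that sync and the subsequent low-latency lookup can look like, the following combines the two Feast calls involved. The repository path, the `driver_hourly_stats` feature view, and the `driver_id` entity key are illustrative placeholders rather than this book's actual project definitions:

```python
from datetime import datetime


def build_entity_rows(driver_ids):
    """Shape the entity rows that Feast's online store expects.

    The "driver_id" key name is a placeholder; use your own entity name.
    """
    return [{"driver_id": d} for d in driver_ids]


def sync_and_fetch(repo_path, driver_ids):
    """Sync offline features to the online store, then read them back online.

    Requires a Feast repository (a directory containing feature_store.yaml)
    at repo_path; the feature view and feature names below are placeholders.
    """
    from feast import FeatureStore  # imported here so the sketch stays importable

    store = FeatureStore(repo_path=repo_path)

    # Equivalent to `feast materialize-incremental <now>` on the CLI:
    # load offline feature values up to "now" into the online store.
    store.materialize_incremental(end_date=datetime.utcnow())

    # At inference time, the REST endpoint would fetch the latest values
    # for the requested entities with a low-latency online lookup.
    return store.get_online_features(
        features=[
            "driver_hourly_stats:conv_rate",  # placeholder feature references
            "driver_hourly_stats:acc_rate",
        ],
        entity_rows=build_entity_rows(driver_ids),
    ).to_dict()
```

In practice, the materialization step usually runs once (or on a schedule) from the command line, while `get_online_features` is the call the model-serving endpoint makes on every request before passing the feature vector to the joblib-loaded model.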