Feature Store for Machine Learning

By: Jayanth Kumar M J

Overview of this book

A feature store is a storage layer in machine learning (ML) operations where data scientists and ML engineers store transformed and curated features for ML models, making those features available for model training, inference (batch and online), and reuse in other ML pipelines. Knowing how to use feature stores to their fullest potential can save you a lot of time and effort, and this book will teach you everything you need to know to get started.

Feature Store for Machine Learning is for data scientists who want to learn how to use feature stores to share and reuse each other's work and expertise. You'll be able to implement practices that eliminate the reprocessing of data, make models reproducible, and reduce duplicated work, thus improving the time to production of your ML models. While the book offers some theoretical groundwork for developers who are just getting to grips with feature stores, there is plenty of practical know-how for those ready to put their knowledge to work. With a hands-on approach to implementation and the associated methodologies, you'll get up and running in no time.

By the end of this book, you'll understand why feature stores are essential and how to use them in your ML projects, both on your local system and in the cloud.
Table of Contents (13 chapters)

Section 1 – Why Do We Need a Feature Store?
Section 2 – A Feature Store in Action
Section 3 – Alternatives, Best Practices, and a Use Case

Summary

In this chapter, our aim was to look at how model training and scoring change with a feature store. To walk through the training and scoring stages of the ML life cycle, we used the resources created in the previous chapter. In the model training phase, we looked at how data engineers and data scientists can collaborate to build a better model. For model prediction, we discussed batch model scoring and how using an offline store is a cost-effective way of running a batch model. We also built a REST wrapper for the online model and added Feast code to fetch the features for prediction at runtime. At the end of the chapter, we looked at the changes required when features are updated during development.
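The online scoring flow summarized above, a REST wrapper that calls Feast to fetch feature values at request time, can be sketched roughly as follows. This is a minimal illustration, not the book's actual code: the feature view name (`driver_hourly_stats`), the feature names, the entity key (`driver_id`), and the repository path are all assumed for the example.

```python
# Sketch of fetching features from a Feast online store at prediction
# time, as an online-model REST handler might do. All feature and
# entity names here are illustrative assumptions.

FEATURES = [
    "driver_hourly_stats:conv_rate",
    "driver_hourly_stats:acc_rate",
]

def build_entity_rows(driver_ids):
    """Shape incoming request IDs into the entity rows Feast expects."""
    return [{"driver_id": d} for d in driver_ids]

def fetch_online_features(driver_ids, repo_path="."):
    """Look up the latest feature values for the given entities from the
    online store. The Feast import is deferred so this module can be
    loaded even where Feast is not installed."""
    from feast import FeatureStore

    store = FeatureStore(repo_path=repo_path)
    response = store.get_online_features(
        features=FEATURES,
        entity_rows=build_entity_rows(driver_ids),
    )
    # Returns a mapping of feature name -> list of values, one per entity,
    # which the REST handler can pass straight to the model for prediction.
    return response.to_dict()
```

A request handler would call `fetch_online_features` with the entity IDs from the request body, then feed the returned values into the trained model, so feature retrieval stays consistent between training and serving.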

In the next chapter, we will continue with the batch and online models built in this chapter, productionize them, and look at the challenges that arise once the models are in production.