Feature Store for Machine Learning

By : Jayanth Kumar M J

Overview of this book

A feature store is a storage layer in machine learning (ML) operations where data scientists and ML engineers store transformed and curated features for ML models. This makes the features available for model training, inference (batch and online), and reuse across other ML pipelines. Knowing how to use feature stores to their fullest potential can save you a lot of time and effort, and this book teaches you everything you need to get started.

Feature Store for Machine Learning is for data scientists who want to learn how to use feature stores to share and reuse each other's work and expertise. You'll be able to implement practices that eliminate the reprocessing of data, provide model reproducibility, and reduce duplication of work, thus improving the time to production of ML models. While this ML book offers some theoretical groundwork for developers who are just getting to grips with feature stores, there's plenty of practical know-how for those ready to put their knowledge to work. With a hands-on approach to implementation and associated methodologies, you'll get up and running in no time.

By the end of this book, you'll understand why feature stores are essential and how to use them in your ML projects, both on your local system and in the cloud.
Table of Contents (13 chapters)

1. Section 1 – Why Do We Need a Feature Store?
4. Section 2 – A Feature Store in Action
9. Section 3 – Alternatives, Best Practices, and a Use Case

Data processing and feature engineering

In this section, let's use the telecom customer churn dataset to generate the features that will be used for training the model. Create a notebook, name it feature-engineering.ipynb, and install the required dependencies:

!pip install pandas scikit-learn python-slugify s3fs sagemaker

Once the libraries are installed, read the data. For this exercise, I have downloaded the data from Kaggle and saved it in a location accessible from the notebook.

The following command reads the data from S3:

import os
import numpy as np
import pandas as pd
from slugify import slugify
from sklearn.preprocessing import LabelEncoder, StandardScaler

""" If you are executing the notebook outside AWS (local JupyterLab, Google Colab, Kaggle, etc.), uncomment the following lines of code and set the AWS credentials. """
#os.environ...
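As a rough sketch of what the subsequent read and preprocessing steps might look like with the imports above: with s3fs installed, pandas can read the churn CSV directly from an S3 URI, and the imported LabelEncoder and StandardScaler can then encode categorical columns and standardize numeric ones. The bucket path and column names below are hypothetical (the real dataset is the Kaggle telecom churn CSV), and a tiny in-memory stand-in frame is used so the sketch is self-contained:

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder, StandardScaler

# In the notebook, reading from S3 would look something like
# (hypothetical bucket and key):
# churn_df = pd.read_csv("s3://<your-bucket>/telco-customer-churn.csv")
# Here, a tiny stand-in frame keeps the sketch self-contained.
churn_df = pd.DataFrame({
    "gender": ["Male", "Female", "Female", "Male"],
    "tenure": [1, 34, 2, 45],
    "MonthlyCharges": [29.85, 56.95, 53.85, 42.30],
})

# Label-encode a categorical column into integer codes.
encoder = LabelEncoder()
churn_df["gender"] = encoder.fit_transform(churn_df["gender"])

# Standardize numeric columns to zero mean and unit variance.
scaler = StandardScaler()
numeric_cols = ["tenure", "MonthlyCharges"]
churn_df[numeric_cols] = scaler.fit_transform(churn_df[numeric_cols])

print(churn_df)
```

The same encode-then-scale pattern extends to the full set of categorical and numeric columns in the real churn dataset.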