What this book covers
Chapter 1, An Overview of the Machine Learning Life Cycle, starts with a brief introduction to ML and then dives deep into an ML use case – a customer lifetime value model. The chapter runs through the different stages of ML development, discusses the most time-consuming parts of the process, and finally contrasts what an ideal world and the real world look like in ML development.
Chapter 2, What Problems Do Feature Stores Solve?, introduces the main focus of the book: feature management and feature stores. It discusses the importance of features in production systems, the different ways to bring features into production and the common issues with these approaches, and how a feature store can overcome those issues.
Chapter 3, Feature Store Fundamentals, Terminology, and Usage, introduces an open source feature store – Feast – and covers its installation, the terminology used in the feature store world, and basic API usage. Finally, it briefly introduces the different components that work together in Feast.
Chapter 4, Adding Feature Store to ML Models, walks readers through installing Feast on AWS, creating the required resources, such as S3 buckets, a Redshift cluster, and the Glue catalog, step by step with screenshots. Finally, it revisits the feature engineering aspect of the customer lifetime value model developed in Chapter 1, An Overview of the Machine Learning Life Cycle, and creates and ingests the curated features into Feast.
Chapter 5, Model Training and Inference, continues from where we left off in Chapter 4, Adding Feature Store to ML Models, and discusses how a feature store can help data scientists and ML engineers collaborate in the development of an ML model. It covers how to use Feast for batch model inference and how to build a REST API for online model inference.
Chapter 6, Model to Production and Beyond, discusses the creation of an orchestration environment using Amazon Managed Workflows for Apache Airflow (MWAA), uses the feature engineering, model training, and inference code/notebooks built in the previous chapters, and deploys the batch and online model pipelines into production. Finally, it discusses aspects beyond production, such as feature monitoring, changes to feature definitions, and also building the next ML model.
Chapter 7, Feast Alternatives and ML Best Practices, introduces other feature stores, such as Tecton, Databricks Feature Store, Google Cloud's Vertex AI, Hopsworks Feature Store, and Amazon SageMaker Feature Store. It also introduces the basic usage of the latter so that users can get the gist of what it is like to use a managed feature store. Finally, it briefly discusses ML best practices.
Chapter 8, Use Case – Customer Churn Prediction, uses Amazon SageMaker's managed feature store offering and runs through an end-to-end use case to predict customer churn on a telecom dataset. It also covers examples of feature drift monitoring and model performance monitoring.