Summary
In this chapter, we used everything we built in the previous chapters to productionize the ML models for batch and online use cases. To do that, we created an Amazon MWAA environment and used it to orchestrate the batch model pipeline. For the online model, we used Airflow to orchestrate the feature engineering pipeline and used the SageMaker inference components to deploy a Dockerized online model as a SageMaker endpoint.
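As a rough illustration of that deployment step, the following is a minimal sketch of registering a container image as a SageMaker model and exposing it as a real-time endpoint with boto3. The image URI, role ARN, S3 path, and resource names here are hypothetical placeholders, not the chapter's actual values:

```python
import boto3

sm = boto3.client("sagemaker")

# Hypothetical names and ARNs for illustration only.
model_name = "online-model"
image_uri = "123456789012.dkr.ecr.us-east-1.amazonaws.com/online-model:latest"
role_arn = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"

# Register the Docker image (and model artifacts) as a SageMaker model.
sm.create_model(
    ModelName=model_name,
    PrimaryContainer={
        "Image": image_uri,
        "ModelDataUrl": "s3://my-bucket/model/model.tar.gz",
    },
    ExecutionRoleArn=role_arn,
)

# Define the instance type and count that will serve the model.
sm.create_endpoint_config(
    EndpointConfigName=f"{model_name}-config",
    ProductionVariants=[
        {
            "VariantName": "AllTraffic",
            "ModelName": model_name,
            "InstanceType": "ml.m5.large",
            "InitialInstanceCount": 1,
        }
    ],
)

# Deploy the model behind a real-time HTTPS endpoint.
sm.create_endpoint(
    EndpointName=f"{model_name}-endpoint",
    EndpointConfigName=f"{model_name}-config",
)
```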