The previous chapter opened the conversation about how we get our solutions out into the world, covering different deployment patterns and some of the tools we can use to implement them. This chapter builds on that conversation by discussing the concepts and tools we can use to scale our solutions to cope with large volumes of data or traffic.
Running some simple machine learning (ML) models on a few thousand data points on your laptop is a good exercise, especially during the discovery and proof-of-concept steps, outlined earlier, that begin any ML development project. This approach is not appropriate, however, if we have to process many millions of data points at relatively high frequency, or train thousands of models at a similar scale at any one time. That requires a different approach, mindset, and toolkit.
In the following pages, we will cover some details of two of the most...