In this chapter, we studied how to deploy machine learning models using a traditional server-based strategy built on R's plumber package, and then enhanced that approach by containerizing the plumber API with Docker. We then studied how serverless applications can be built using cloud services and how such applications can be scaled as needed with minimal code.
We explored various web services, such as AWS Lambda, Amazon SageMaker, and Amazon API Gateway, and studied how these services can be orchestrated to deploy a machine learning model as a serverless application.
In the next chapter, we will work on a capstone project, taking up a recent research paper based on a real-world problem and reproducing its results.