Deploying ML Models for Batch Scoring
Deploying ML models for batch scoring lets you make predictions over large volumes of data. This approach suits use cases where you don’t need model predictions immediately, but rather minutes or hours later. If you need to run inference once a day, week, or month over a large dataset, batch inferencing is ideal.
Batch inferencing allows data scientists and ML professionals to use cloud compute only when needed, rather than paying for compute resources to remain available for real-time responses. Compute resources can be spun up to run a batch inferencing job and spun down after the results have been delivered to business users. We are going to show you how to use the Azure Machine Learning service, through both the studio and the Python SDK, to deploy trained models to managed endpoints: HTTPS REST APIs that clients can invoke to get scoring results from a trained model via batch inferencing.
...