Deploying Detectron2 Models into Server Environments
This chapter walks you through the export process for converting Detectron2 models into deployable artifacts. Specifically, it describes the standard file formats for deep learning models, such as TorchScript, and the runtimes that execute these formats, such as PyTorch and C++ environments. It then provides the steps to convert Detectron2 models into these file formats and deploy them to the corresponding runtimes.
By the end of this chapter, you will understand the standard file formats and runtimes that Detectron2 supports. You will be able to export Detectron2 models to the TorchScript format using either the tracing or the scripting method, and create a C++ application that loads and executes the exported models. A minimal export sketch appears below.
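As a preview of the tracing-based export covered later, the following sketch exports a Faster R-CNN model from the Detectron2 model zoo to TorchScript using `TracingAdapter` and `torch.jit.trace`. The chosen config, the random placeholder image, and the output filename `model.ts` are illustrative assumptions, not values from this chapter.

```python
# A minimal sketch of tracing-based export, assuming a model-zoo config;
# paths, image size, and the output filename are placeholders.
import torch
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.modeling import build_model
from detectron2.checkpoint import DetectionCheckpointer
from detectron2.export import TracingAdapter

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.DEVICE = "cpu"  # keep the sketch runnable without a GPU

model = build_model(cfg)
DetectionCheckpointer(model).load(cfg.MODEL.WEIGHTS)
model.eval()

# Detectron2 models expect a list of dicts as input; TracingAdapter
# flattens these inputs and outputs into tensors so torch.jit.trace
# can record the computation graph.
image = torch.rand(3, 480, 640)  # placeholder CHW image tensor
inputs = [{"image": image}]
adapter = TracingAdapter(model, inputs)
traced = torch.jit.trace(adapter, adapter.flattened_inputs)
traced.save("model.ts")  # hypothetical output path
```

The saved `model.ts` file can then be loaded from Python with `torch.jit.load` or from a C++ application via LibTorch, which is the deployment path discussed later in this chapter.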
In this chapter, we will cover the following topics:
- Supported file formats and runtimes for PyTorch models
- Deploying custom Detectron2 models