Region of Interest Heads

The components of the Region of Interest Heads perform the second stage of the object detection architecture that Detectron2 implements. Figure 4.11 illustrates the steps inside this stage.

Figure 4.11: The Region of Interest Heads

Specifically, this stage takes the features extracted from the backbone network and the ground-truth bounding boxes (if training) and performs the following steps (a configuration sketch follows the list):

  1. Label and sample proposals (if training).
  2. Extract box features.
  3. Perform predictions.
  4. Calculate losses (if training).
  5. Perform inference (if inferencing).
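
As a concrete reference for these steps, the following minimal sketch loads a Detectron2 model zoo configuration and prints the settings that govern this stage. The specific config file (`COCO-Detection/faster_rcnn_R_50_C4_1x.yaml`) is chosen purely for illustration; any other object detection configuration would work the same way, and the printed values depend on the configuration chosen.

```python
# A minimal sketch (illustration only): load a model zoo configuration and
# inspect the settings that control the Region of Interest Heads stage.
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.modeling import build_model

cfg = get_cfg()
# Assumed model zoo config; any object detection config illustrates the same point.
cfg.merge_from_file(
    model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_C4_1x.yaml")
)
cfg.MODEL.DEVICE = "cpu"  # build on CPU so the sketch runs without a GPU

# Proposals kept after NMS that enter this stage during training.
print(cfg.MODEL.RPN.POST_NMS_TOPK_TRAIN)
# Size of the sampled proposal mini-batch per image and its positive fraction.
print(cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE, cfg.MODEL.ROI_HEADS.POSITIVE_FRACTION)
# IoU threshold(s) used to label proposals as positive or negative.
print(cfg.MODEL.ROI_HEADS.IOU_THRESHOLDS)

# The roi_heads module of the built model implements steps 1 to 5 above.
model = build_model(cfg)
print(type(model.roi_heads).__name__)
```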

During training, out of the 2,000 proposals kept after non-maximum suppression (POST_NMS_TOPK_TRAIN), there can be many more negative proposals than positive ones (especially at the early stages of training, when the RPN is not yet accurate). Similar to the RPN stage, this step also labels the proposals (based on the ground truth) and samples another mini-batch with a fraction of positive proposals...
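
The labeling and sampling described above can be illustrated with a small, plain-PyTorch sketch. This is not Detectron2's internal implementation; the helper name `sample_proposals` and the values 512 and 0.25 are assumptions that mirror the `ROI_HEADS.BATCH_SIZE_PER_IMAGE` and `ROI_HEADS.POSITIVE_FRACTION` settings.

```python
import torch

def sample_proposals(labels, batch_size=512, positive_fraction=0.25):
    """Simplified sketch (hypothetical helper, not Detectron2's API):
    pick a mini-batch of proposal indices with at most `positive_fraction`
    positives and fill the rest with negatives.
    `labels` holds 1 for positive proposals and 0 for negative ones."""
    positive = torch.nonzero(labels == 1).flatten()
    negative = torch.nonzero(labels == 0).flatten()

    num_pos = min(int(batch_size * positive_fraction), positive.numel())
    num_neg = min(batch_size - num_pos, negative.numel())

    # Randomly select the required number of positive and negative proposals.
    pos_idx = positive[torch.randperm(positive.numel())[:num_pos]]
    neg_idx = negative[torch.randperm(negative.numel())[:num_neg]]
    return torch.cat([pos_idx, neg_idx])

# Example: 2,000 proposals, only 40 of which match a ground-truth box.
labels = torch.zeros(2000, dtype=torch.int64)
labels[:40] = 1
batch = sample_proposals(labels)
print(batch.numel())  # up to 512 sampled proposal indices
```

Sampling a fixed-size mini-batch with a capped positive fraction keeps the second-stage losses from being dominated by the many negative proposals, which is exactly the imbalance described in the paragraph above.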