Hands-On Computer Vision with Detectron2

By: Van Vung Pham

Overview of this book

Computer vision is a crucial component of many modern businesses, including automobiles, robotics, and manufacturing, and its market is growing rapidly. This book helps you explore Detectron2, Facebook's next-gen library providing cutting-edge detection and segmentation algorithms. It's used in research and practical projects at Facebook to support computer vision tasks, and its models can be exported to TorchScript or ONNX for deployment.

The book provides you with step-by-step guidance on using existing models in Detectron2 for computer vision tasks (object detection, instance segmentation, key-point detection, semantic segmentation, and panoptic segmentation). You'll get to grips with the theories and visualizations of Detectron2's architecture and learn how each module in Detectron2 works. As you advance, you'll build your practical skills by working on two real-life projects (preparing data, training models, fine-tuning models, and deployments) for object detection and instance segmentation tasks using Detectron2. Finally, you'll deploy Detectron2 models into production and develop Detectron2 applications for mobile devices.

By the end of this deep learning book, you'll have gained sound theoretical knowledge and useful hands-on skills to help you solve advanced computer vision tasks using Detectron2.
Table of Contents (20 chapters)

Part 1: Introduction to Detectron2
Part 2: Developing Custom Object Detection Models
Part 3: Developing a Custom Detectron2 Model for Instance Segmentation Tasks
Part 4: Deploying Detectron2 Models into Production

Annotation formats

As with labeling tools, many different annotation formats are available for annotating images for computer vision applications. The most common standards are COCO JSON, Pascal VOC XML, and YOLO PyTorch TXT. Many more formats exist (e.g., TensorFlow TFRecord and CreateML JSON), but due to space limitations, this section covers only the three most common standards listed previously. Furthermore, this section uses two images and labels extracted from the test set of the brain tumor object detection dataset available from Kaggle (https://www.kaggle.com/datasets/davidbroberts/brain-tumor-object-detection-datasets) to illustrate these data formats and demonstrate their differences, as shown in Figure 3.4. This section briefly discusses the key points of each annotation format, and interested readers can refer to the GitHub page of this chapter to inspect this same dataset in different formats in further detail.
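The three formats differ mainly in how they encode bounding boxes: Pascal VOC XML stores absolute corner coordinates (xmin, ymin, xmax, ymax), COCO JSON stores [x_min, y_min, width, height] in absolute pixels, and YOLO TXT stores the box center and size normalized to [0, 1] by the image dimensions. The following sketch converts a single box between these conventions; the image size and coordinates are made-up illustrative values, not taken from the brain tumor dataset:

```python
# Sketch: one bounding box expressed in the three common annotation formats.
# The helper names and the sample numbers below are illustrative assumptions.

def voc_to_coco(xmin, ymin, xmax, ymax):
    """Pascal VOC corners -> COCO [x_min, y_min, width, height] (pixels)."""
    return [xmin, ymin, xmax - xmin, ymax - ymin]

def voc_to_yolo(xmin, ymin, xmax, ymax, img_w, img_h):
    """Pascal VOC corners -> YOLO [x_center, y_center, width, height],
    each normalized to [0, 1] by the image width/height."""
    return [
        (xmin + xmax) / 2 / img_w,
        (ymin + ymax) / 2 / img_h,
        (xmax - xmin) / img_w,
        (ymax - ymin) / img_h,
    ]

if __name__ == "__main__":
    # A hypothetical 640x512 image with one labeled region
    box = (100, 150, 300, 350)            # Pascal VOC: xmin, ymin, xmax, ymax
    print(voc_to_coco(*box))              # -> [100, 150, 200, 200]
    print(voc_to_yolo(*box, 640, 512))    # -> [0.3125, 0.48828125, 0.3125, 0.390625]
```

Note that a YOLO TXT line also prepends an integer class index before the four normalized values, and COCO JSON wraps each box in a larger JSON structure with image and category records.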

Figure 3.4: Two images and tumor labels used to illustrate different annotation formats
