Mastering Computer Vision with TensorFlow 2.x

By: Krishnendu Kar

Overview of this book

Computer vision allows machines to gain a human-level understanding of images and videos by visualizing, processing, and analyzing them. This book focuses on using TensorFlow to help you learn advanced computer vision tasks such as image acquisition, processing, and analysis. You'll start with the key principles of computer vision and deep learning to build a solid foundation, before covering neural network architectures and understanding how they work rather than using them as a black box. Next, you'll explore architectures such as VGG, ResNet, Inception, R-CNN, SSD, YOLO, and MobileNet. As you advance, you'll learn to apply visual search methods using transfer learning. You'll also cover advanced computer vision concepts such as semantic segmentation, image inpainting with GANs, object tracking, video segmentation, and action recognition. Later, the book focuses on how machine learning and deep learning concepts can be used to perform tasks such as edge detection and face recognition. You'll then discover how to develop powerful neural network models on your PC and on various cloud platforms. Finally, you'll learn to apply model optimization methods to deploy models on edge devices for real-time inference. By the end of this book, you'll have a solid understanding of computer vision and be able to confidently develop models to automate tasks.
Table of Contents (18 chapters)

Section 1: Introduction to Computer Vision and Neural Networks
Section 2: Advanced Concepts of Computer Vision with TensorFlow
Section 3: Advanced Implementation of Computer Vision with TensorFlow
Section 4: TensorFlow Implementation at the Edge and on the Cloud

An overview of the Feature Pyramid Network and RetinaNet

We learned in Chapter 5, Neural Network Architecture and Models, that each layer of a CNN is a feature vector in itself. Two critical and interdependent properties follow from this, as explained here:

  • As we move up the CNN through successive convolutional layers toward the fully connected layer, we identify more semantically strong features, from a simple edge to a part of an object to a complete object. In doing so, however, the resolution of the image decreases: the feature map's width and height shrink while its depth increases (see the code sketch after this list).
  • Objects of different scales (small versus large) are affected by this change in resolution and dimension. As the following diagram shows, a smaller object is harder to detect at the highest layer because its features become so blurred that the CNN will not be able to detect...
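
To make the first point concrete, here is a minimal, self-contained TensorFlow 2.x sketch (not taken from the book) that stacks a few convolution-and-pooling stages and prints the shape of the feature map after each stage. The input size, channel counts, and layer choices are illustrative assumptions only, not the backbone used later in this chapter.

```python
import tensorflow as tf

# A toy backbone (illustrative only): each stage halves the feature map's
# width and height while increasing its depth (number of channels).
inputs = tf.keras.Input(shape=(224, 224, 3))
x = inputs
stage_outputs = []
for channels in [64, 128, 256, 512]:
    x = tf.keras.layers.Conv2D(channels, 3, padding="same", activation="relu")(x)
    x = tf.keras.layers.MaxPooling2D(pool_size=2)(x)  # halves width and height
    stage_outputs.append(x)

model = tf.keras.Model(inputs=inputs, outputs=stage_outputs)

for i, fmap in enumerate(model.outputs, start=1):
    # Prints (None, 112, 112, 64) ... (None, 14, 14, 512): semantically
    # stronger but spatially coarser features as we go deeper.
    print(f"stage {i}: {fmap.shape}")
```

Running this shows the spatial dimensions halving at each stage (112 → 56 → 28 → 14) while the depth grows (64 → 512), which is exactly the trade-off described in the first bullet.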