Modern Computer Vision with PyTorch

By: V Kishore Ayyadevara, Yeshwanth Reddy

Overview of this book

Deep learning is the driving force behind many recent advances in various computer vision (CV) applications. This book takes a hands-on approach to help you solve over 50 CV problems using PyTorch 1.x on real-world datasets. You'll start by building a neural network (NN) from scratch using NumPy and PyTorch and discover best practices for tweaking its hyperparameters. You'll then perform image classification using convolutional neural networks and transfer learning, and understand how they work. As you progress, you'll implement multiple use cases of 2D and 3D multi-object detection, segmentation, and human pose estimation by learning about the R-CNN family, SSD, YOLO, and U-Net architectures, and the Detectron2 platform. The book will also guide you in performing facial expression swapping, generating new faces, and manipulating facial expressions as you explore autoencoders and modern generative adversarial networks. You'll learn how to combine CV with NLP techniques, such as LSTMs and transformers, and with RL techniques, such as deep Q-learning, to implement OCR, image captioning, object detection, and a self-driving car agent. Finally, you'll move your NN model to production on the AWS Cloud. By the end of this book, you'll be able to confidently leverage modern NN architectures to solve over 50 real-world CV problems.
Table of Contents (25 chapters)

Section 1 - Fundamentals of Deep Learning for Computer Vision
Section 2 - Object Classification and Detection
Section 3 - Image Manipulation
Section 4 - Combining Computer Vision with Other Techniques

Artificial Neural Network Fundamentals

An Artificial Neural Network (ANN) is a supervised learning algorithm that is loosely inspired by the way the human brain functions. Much as neurons in the brain are connected and activated, a neural network takes an input and passes it through a function, activating certain subsequent neurons and ultimately producing an output.
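
To make this concrete, here is a minimal NumPy sketch (an illustration, not the book's code) of a single artificial neuron: it computes a weighted sum of its inputs plus a bias and passes the result through a nonlinear activation:

import numpy as np

def neuron(x, w, b):
    """One artificial neuron: weighted sum of inputs, then a sigmoid activation."""
    z = np.dot(w, x) + b         # weighted sum of the inputs plus a bias
    return 1 / (1 + np.exp(-z))  # sigmoid squashes z into the range (0, 1)

# Example: three input values feeding one neuron
x = np.array([0.5, -1.0, 2.0])  # inputs
w = np.array([0.4, 0.2, -0.1])  # one weight per input
b = 0.1                         # bias term
print(neuron(x, w, b))          # the neuron's activation

A network is simply many such neurons arranged in layers, with the activations of one layer serving as the inputs to the next.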

There are several standard ANN architectures. The universal approximation theorem states that we can always find a large enough neural network architecture with the right set of weights that can approximate, to arbitrary accuracy, the output for any given input. This means that, for a given dataset/task, we can create an architecture and keep adjusting its weights until the ANN predicts what we want it to predict. Adjusting the weights until this happens is called training the neural network. The ability to successfully train customized architectures on large datasets is how ANNs have gained prominence in solving a wide variety of tasks.
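
As a toy illustration of what "adjusting the weights" means (a hedged sketch, not the book's implementation), the following NumPy snippet fits a single weight w to data generated from y = 2x by repeatedly nudging w in the direction that reduces the error:

import numpy as np

# Toy data generated from y = 2x; the "right" weight is w = 2
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2 * x

w = 0.0    # start from an arbitrary weight
lr = 0.01  # learning rate: how big each adjustment is

for epoch in range(100):
    pred = w * x                   # predict with the current weight
    error = pred - y               # gap between predictions and targets
    grad = 2 * np.mean(error * x)  # gradient of the mean squared error w.r.t. w
    w -= lr * grad                 # adjust the weight to reduce the error

print(w)  # ends up very close to 2.0

Training a full neural network follows the same loop, just with many more weights and a systematic way (backpropagation) of computing each weight's gradient.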

One of the prominent tasks in computer vision is recognizing the class of the object present in an image. The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) was an annual competition to identify the classes of objects present in images. The reduction in the classification error rate over the years is as follows:

[Chart: ImageNet classification error rate by year]

The year 2012 was when a neural network (AlexNet) was used in the winning solution of the competition. As you can see from the preceding chart, leveraging neural networks led to a considerable drop in the error rate from 2011 to 2012. Since then, as neural networks have grown deeper and more complex, the classification error has kept falling and has surpassed human-level performance. This gives us solid motivation to learn and implement neural networks for our own tasks, where applicable.

In this chapter, we will create a very simple architecture on a simple dataset and focus on how the various building blocks of an ANN (feedforward propagation, backpropagation, and the learning rate) help adjust the weights so that the network learns to predict the expected outputs from the given inputs. We will first learn, mathematically, what a neural network is, and then build one from scratch to gain a solid foundation. Then we will learn about each component responsible for training the neural network and code them as well. Overall, we will cover the following topics (a compact end-to-end sketch follows the list):

  • Comparing AI and traditional machine learning
  • Learning about the artificial neural network building blocks
  • Implementing feedforward propagation
  • Implementing backpropagation
  • Putting feedforward propagation and backpropagation together
  • Understanding the impact of the learning rate
  • Summarizing the training process of a neural network
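
As a preview of how these pieces fit together, here is a compact NumPy sketch (assumptions: a tiny 2-3-1 network, sigmoid activations, mean squared error, and the toy XOR dataset; an illustration, not the book's exact code) that runs feedforward propagation, backpropagates the error, and updates the weights with a learning rate:

import numpy as np

np.random.seed(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # XOR inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

# Randomly initialized weights and biases for a 2-3-1 network
W1 = np.random.randn(2, 3)
b1 = np.zeros(3)
W2 = np.random.randn(3, 1)
b2 = np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

lr = 0.5  # learning rate: scales every weight update

for epoch in range(10000):
    # Feedforward propagation
    h = sigmoid(X @ W1 + b1)        # hidden-layer activations
    out = sigmoid(h @ W2 + b2)      # network output
    loss = np.mean((out - y) ** 2)  # mean squared error

    # Backpropagation: apply the chain rule layer by layer
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    d_W2 = h.T @ d_out
    d_b2 = d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * h * (1 - h)
    d_W1 = X.T @ d_h
    d_b1 = d_h.sum(axis=0)

    # Update every weight and bias, scaled by the learning rate
    W2 -= lr * d_W2; b2 -= lr * d_b2
    W1 -= lr * d_W1; b1 -= lr * d_b1

print(np.round(out, 2))  # predictions should move toward [0, 1, 1, 0]

Notice the role of the learning rate: too small and training crawls; too large and the loss can oscillate or diverge. We will examine this trade-off in detail later in the chapter.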