
Learning OpenCV 4 Computer Vision with Python 3 - Third Edition

By: Joseph Howse, Joe Minichino

Overview of this book

Computer vision is a rapidly evolving science, encompassing diverse applications and techniques. This book will help not only those who are getting started with computer vision but also experts in the domain. You’ll be able to put theory into practice by building apps with OpenCV 4 and Python 3. You’ll start by understanding OpenCV 4 and how to set it up with Python 3 on various platforms. Next, you’ll learn how to perform basic operations such as reading, writing, manipulating, and displaying still images, videos, and camera feeds. The book takes you through image processing, video analysis, depth estimation, and segmentation, and provides hands-on practice such as building a GUI app. Next, you’ll tackle two popular challenges: face detection and face recognition. You’ll also learn about object classification and machine learning concepts, which will enable you to create and use object detectors and classifiers, and even track objects in movies or camera feeds. Later, you’ll develop your skills in 3D tracking and augmented reality. Finally, you’ll cover ANNs and DNNs, learning how to develop apps for recognizing handwritten digits and classifying a person's gender and age. By the end of this book, you’ll have the skills you need to execute real-world computer vision projects.

Appendix A: Bending Color Space with the Curves Filter

Starting in Chapter 3, Processing Images with OpenCV, our Cameo demo application has incorporated an image processing effect called curves, which it uses to emulate the color bias of certain photo films. This appendix describes the concept of curves and their implementation using SciPy.

Curves are a technique for remapping colors. With curves, a channel's value at a destination pixel is a function of (only) the same channel's value at the source pixel. Moreover, we do not define functions directly; instead, for each function, we define a set of control points that the function must fit by means of interpolation. In pseudocode, for a BGR image, we have the following:

dst.b = funcB(src.b), where funcB interpolates pointsB
dst.g = funcG(src.g), where funcG interpolates pointsG
dst.r = funcR(src.r), where funcR interpolates pointsR
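The pseudocode above can be sketched in Python. This is a minimal, simplified illustration, not the book's actual implementation: it uses NumPy's `numpy.interp` (piecewise-linear interpolation) in place of SciPy's smoother spline interpolation, and the function names `create_curve_func` and `apply_curve_to_channel` are hypothetical, chosen here for clarity.

```python
import numpy as np

def create_curve_func(points):
    """Return a function mapping channel values in [0, 255] to [0, 255],
    interpolating the given (input, output) control points.
    Sketch only: linear interpolation via numpy.interp stands in for
    the SciPy spline interpolation described in the text."""
    if points is None or len(points) < 2:
        return None
    xs, ys = zip(*points)
    def curve(x):
        # Clip so interpolated values stay in the valid channel range.
        return np.clip(np.interp(x, xs, ys), 0, 255)
    return curve

def apply_curve_to_channel(channel, curve):
    """Apply a curve to a uint8 channel via a 256-entry lookup table,
    so the (possibly expensive) curve is evaluated only 256 times."""
    lut = curve(np.arange(256)).astype(np.uint8)
    return lut[channel]

# Example: lift the midtones of the blue channel,
# pinning black (0) and white (255) in place.
func_b = create_curve_func([(0, 0), (128, 160), (255, 255)])
b = np.array([[0, 128, 255]], dtype=np.uint8)
print(apply_curve_to_channel(b, func_b))  # [[  0 160 255]]
```

Applying one such function per channel (`funcB`, `funcG`, `funcR`) to a BGR image's split channels reproduces the per-channel remapping that the pseudocode describes.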