OpenCV 3 Blueprints

Overview of this book

Computer vision is becoming accessible to a large audience of software developers who can leverage mature libraries such as OpenCV. However, as they move beyond their first experiments, developers may struggle to ensure that their solutions are sufficiently well optimized, well trained, robust, and adaptive to real-world conditions. With sufficient knowledge of OpenCV, these developers will have the confidence to create their own computer vision projects. This book will help you tackle the increasingly challenging computer vision problems that you may face in your career. It uses OpenCV 3 to work through a series of interesting projects. Inside these pages, you will find practical and innovative approaches that are battle-tested in the authors’ industry experience and research. Each chapter covers the theory and practice of multiple complementary approaches so that you will be able to choose wisely in your future projects. You will also gain insights into the architecture and algorithms that underpin OpenCV’s functionality. We begin by taking a critical look at inputs in order to decide which kinds of light, cameras, lenses, and image formats are best suited to a given purpose. We proceed to consider the finer aspects of computational photography as we build an automated camera to assist nature photographers. You will gain a deep understanding of some of the most widely applicable and reliable techniques in object detection, feature selection, tracking, and even biometric recognition. We will also build Android projects in which we explore the complexities of camera motion: first in panoramic image stitching and then in video stabilization. By the end of the book, you will have a much richer understanding of imaging, motion, machine learning, and the architecture of computer vision libraries and applications!

What's next?


What we have right now is a very bare-bones implementation of video stabilization. There are a few more things you can add to make it more robust, more automated, and more pleasing to the eye. Here are a few ideas to get you started.

Identifying gyroscope axes

In this chapter, we've hard-coded the gyroscope axes. The mapping between the gyroscope's axes and the camera's image axes may differ across mobile phone manufacturers. Using a calibration technique similar to the one in this chapter, you should be able to search for the axis configuration that minimizes the error across the video.
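
Here is a minimal sketch of such a search, assuming the gyroscope samples are stored as an N x 3 NumPy array of angular velocities. The name calibration_error is a hypothetical placeholder for whatever function you use to score a candidate gyroscope signal against the recorded video; it is not part of this chapter's code.

from itertools import permutations, product

import numpy as np

def find_gyro_axes(video_path, gyro, calibration_error):
    """Try every axis ordering and sign flip of the gyroscope data and
    return the configuration that yields the lowest calibration error.

    gyro              -- N x 3 array of angular velocity samples
    calibration_error -- assumed scoring function: (video_path, gyro) -> float
    """
    best = None
    for order in permutations(range(3)):              # axis reordering, e.g. (y, x, z)
        for signs in product((1.0, -1.0), repeat=3):  # axis sign flips
            candidate = gyro[:, list(order)] * np.asarray(signs)
            err = calibration_error(video_path, candidate)
            if best is None or err < best[0]:
                best = (err, order, signs)
    return best  # (error, axis order, axis signs)

The search is brute force, but there are only 48 combinations (6 orderings times 8 sign patterns), so it is cheap compared to running the calibration itself.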

Estimating the rolling shutter direction

We've also hard-coded the direction of the rolling shutter. Using specific techniques (such as pointing a rapidly blinking LED at the camera), it is possible to estimate the rolling shutter's readout direction and incorporate it into the calibration code. Certain camera sensors (those with a global shutter) don't exhibit rolling shutter artifacts at all; the same test can identify whether such a sensor is being used.
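
As a rough illustration, the following sketch examines a single frame captured while a rapidly blinking LED fills the camera's view. With a rolling shutter, the blinking produces bright and dark bands across the frame; the orientation of those bands reveals the readout direction, and the absence of banding suggests a global shutter. The band_threshold value and the file name in the usage comment are assumptions for this sketch, not values from this chapter.

import cv2
import numpy as np

def estimate_rolling_shutter(frame_bgr, band_threshold=5.0):
    """Classify the rolling shutter readout direction from one frame of a
    fast-blinking LED. band_threshold is an assumed heuristic on the
    variance of the scanline brightness profiles."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    row_profile = gray.mean(axis=1)   # average brightness of each row
    col_profile = gray.mean(axis=0)   # average brightness of each column
    row_var, col_var = row_profile.var(), col_profile.var()
    if max(row_var, col_var) < band_threshold:
        return "no banding (possibly a global shutter sensor)"
    # Horizontal bands mean the rows were exposed at different times,
    # that is, a top-to-bottom (vertical) readout, and vice versa.
    return "vertical readout" if row_var > col_var else "horizontal readout"

# Example usage with a hypothetical file name:
# frame = cv2.imread("led_blink_frame.png")
# print(estimate_rolling_shutter(frame))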

Smoother timelapses

Now that we've stabilized the...