Machine Learning for OpenCV 4 - Second Edition

By: Aditya Sharma, Vishwesh Ravi Shrimali, Michael Beyeler

Overview of this book

OpenCV is an open source library for building computer vision apps. The latest release, OpenCV 4, offers a plethora of features and platform improvements that are covered comprehensively in this up-to-date second edition. You'll start by understanding the new features and setting up OpenCV 4 to build your computer vision applications. You will explore the fundamentals of machine learning and even learn to design different algorithms that can be used for image processing. Gradually, the book will take you through supervised and unsupervised machine learning. You will gain hands-on experience using scikit-learn in Python for a variety of machine learning applications. Later chapters will focus on different machine learning algorithms, such as decision trees, support vector machines (SVMs), and Bayesian learning, and how they can be used for object detection and other computer vision operations. You will then delve into deep learning and ensemble learning, and discover their real-world applications, such as handwritten digit classification and gesture recognition. Finally, you'll get to grips with the latest Intel OpenVINO toolkit for building an image processing system. By the end of this book, you will have developed the skills you need to use machine learning for building intelligent computer vision applications with OpenCV 4.
Table of Contents (18 chapters)

Section 1: Fundamentals of Machine Learning and OpenCV
Section 2: Operations with OpenCV
Section 3: Advanced Machine Learning with OpenCV

Understanding multilayer perceptrons

To create nonlinear decision boundaries, we can combine multiple perceptrons to form a larger network, also known as a multilayer perceptron (MLP). MLPs usually consist of at least three layers, where the first layer has a node (or neuron) for every input feature of the dataset, and the last layer has a node for every class label. Any layers in between are called hidden layers.
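The layer-by-layer structure can be sketched as a forward pass in plain NumPy. This is a minimal illustration, not the book's code: the layer sizes, the sigmoid activation, and the random weights are all assumptions chosen for the example; in practice the weights would be learned by training.

```python
import numpy as np

def sigmoid(z):
    """Squash each pre-activation into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x, layers):
    """Propagate input x through each (weights, bias) layer in turn.

    The output of one layer serves as the input to the next, just as
    the output of one artificial neuron feeds the next in the diagram.
    """
    a = x
    for W, b in layers:
        a = sigmoid(a @ W + b)
    return a

# A toy 2-3-1 network: 2 input features, one hidden layer of 3 neurons,
# and 1 output node. Weights are random here purely for illustration.
rng = np.random.default_rng(0)
layers = [
    (rng.standard_normal((2, 3)), np.zeros(3)),  # input -> hidden
    (rng.standard_normal((3, 1)), np.zeros(1)),  # hidden -> output
]

y = mlp_forward(np.array([0.5, -1.0]), layers)
print(y.shape)  # one value per output node
```

Because the hidden layer applies a nonlinear activation between the two weight matrices, the network as a whole can represent decision boundaries that a single perceptron cannot.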

An example of this feedforward neural network architecture is shown in the following diagram:

In this network, every circle is an artificial neuron (or, essentially, a perceptron), and the output of one artificial neuron might serve as input to the next artificial neuron, much like how real biological neurons are wired up in the brain. By placing perceptrons side by side, we get a single one-layer neural network. Analogously, by stacking one one-layer...