Hands-On Computer Vision with TensorFlow 2

By: Benjamin Planche, Eliot Andres
Overview of this book

Computer vision solutions are becoming increasingly common, making their way into fields such as healthcare, the automotive industry, social media, and robotics. This book will help you explore TensorFlow 2, the brand-new version of Google's open source framework for machine learning. You will understand how to benefit from using convolutional neural networks (CNNs) for visual tasks. Hands-On Computer Vision with TensorFlow 2 starts with the fundamentals of computer vision and deep learning, teaching you how to build a neural network from scratch. You will discover the features that have made TensorFlow the most widely used AI library, along with its intuitive Keras interface. You'll then move on to building, training, and deploying CNNs efficiently. Complete with concrete code examples, the book demonstrates how to classify images with modern solutions, such as Inception and ResNet, and extract specific content using You Only Look Once (YOLO), Mask R-CNN, and U-Net. You will also build generative adversarial networks (GANs) and variational autoencoders (VAEs) to create and edit images, and long short-term memory networks (LSTMs) to analyze videos. In the process, you will acquire advanced insights into transfer learning, data augmentation, domain adaptation, and mobile and web deployment, among other key concepts. By the end of the book, you will have both the theoretical understanding and practical skills to solve advanced computer vision problems with TensorFlow 2.0.
Table of Contents (16 chapters)

Section 1: TensorFlow 2 and Deep Learning Applied to Computer Vision
Section 2: State-of-the-Art Solutions for Classic Recognition Problems
Section 3: Advanced Concepts and New Frontiers of Computer Vision
Assessments

Example app – recognizing facial expressions

To directly apply the concepts presented in this chapter, we will develop an app based on a lightweight computer vision model and deploy it to various platforms.

We will build an app that classifies facial expressions. When pointed at a person's face, it will output that person's expression: happy, sad, surprised, disgusted, angry, or neutral. We will train our model on the Facial Expression Recognition (FER) dataset, put together by Pierre-Luc Carrier and Aaron Courville and available at https://www.kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge. It is composed of 28,709 grayscale images, 48 × 48 pixels in size:

Figure 9-7: Images sampled from the FER dataset
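
To make the training setup concrete, here is a minimal sketch (not the book's exact pipeline) of how such a lightweight classifier could be put together with the Keras API. It assumes the Kaggle download provides a fer2013.csv file whose emotion column holds an integer label and whose pixels column holds the 48 × 48 grayscale values as a space-separated string; the architecture and hyperparameters shown are illustrative choices only:

```python
import numpy as np
import pandas as pd
import tensorflow as tf

def load_fer(csv_path="fer2013.csv"):
    """Parse the FER CSV into normalized image tensors and integer labels."""
    data = pd.read_csv(csv_path)
    pixels = np.array([np.array(p.split(), dtype="float32") for p in data["pixels"]])
    images = pixels.reshape(-1, 48, 48, 1) / 255.0   # 48 x 48 grayscale, scaled to [0, 1]
    labels = data["emotion"].values
    return images, labels

def build_model(num_classes):
    """A deliberately small CNN, suitable for later on-device deployment."""
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(48, 48, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

images, labels = load_fer()
# One output neuron per expression class present in the dataset
model = build_model(num_classes=len(np.unique(labels)))
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(images, labels, validation_split=0.1, epochs=20, batch_size=64)
```

Keeping the network this small is deliberate: a compact Keras model converts readily to on-device formats such as TensorFlow Lite, which matters for the multi-platform deployment mentioned at the start of this section.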

Inside the app, the naive approach would be to capture images with the camera and then feed them directly...