
TensorFlow 2.0 Computer Vision Cookbook

By: Jesús Martínez

Overview of this book

Computer vision is a scientific field that enables machines to identify and process digital images and videos. This book focuses on independent recipes to help you perform various computer vision tasks using TensorFlow. The book begins by taking you through the basics of deep learning for computer vision, along with covering TensorFlow 2.x’s key features, such as the Keras and tf.data.Dataset APIs. You’ll then learn about the ins and outs of common computer vision tasks, such as image classification, transfer learning, image enhancing and styling, and object detection. The book also covers autoencoders in domains such as inverse image search indexes and image denoising, while offering insights into various architectures used in the recipes, such as convolutional neural networks (CNNs), region-based CNNs (R-CNNs), VGGNet, and You Only Look Once (YOLO). Moving on, you’ll discover tips and tricks to solve any problems faced while building various computer vision applications. Finally, you’ll delve into more advanced topics such as Generative Adversarial Networks (GANs), video processing, and AutoML, concluding with a section focused on techniques to help you boost the performance of your networks. By the end of this TensorFlow book, you’ll be able to confidently tackle a wide range of computer vision problems using TensorFlow 2.x.
Table of Contents (14 chapters)

Using rank-N accuracy to evaluate performance

Most of the time, when we're training deep learning-based image classifiers, we care about accuracy, a binary measure of a model's performance based on a one-to-one comparison between its predictions and the ground-truth labels. When the model says there's a leopard in a photo, is there actually a leopard there? In other words, we only ask whether the model's single most probable prediction exactly matches the truth.

However, for more complex datasets, this way of assessing a network's learning might be counterproductive and even unfair, because it's too restrictive. What if the model didn't classify the feline in the picture as a leopard but as a tiger? Moreover, what if the second most probable class was, indeed, a leopard? This means the model has some more learning to do, but it's getting there! That's valuable!
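The idea behind this more lenient metric is simple: a prediction counts as correct if the ground-truth label appears anywhere among the model's N most probable classes. Here's a minimal NumPy sketch of that computation (the function name and the toy probabilities are illustrative, not from the book):

```python
import numpy as np

def rank_n_accuracy(probs, labels, n=1):
    """Fraction of samples whose true label is among the top-n predicted classes.

    probs:  (num_samples, num_classes) array of class probabilities.
    labels: (num_samples,) array of integer ground-truth labels.
    """
    # Indices of the n highest-probability classes for each sample.
    top_n = np.argsort(probs, axis=1)[:, -n:]
    # A sample is a "hit" if its true label appears in its top-n row.
    hits = [label in row for row, label in zip(top_n, labels)]
    return float(np.mean(hits))

# Toy example: 3 samples, 4 classes.
probs = np.array([
    [0.10, 0.60, 0.20, 0.10],   # top-1 prediction: class 1
    [0.30, 0.10, 0.40, 0.20],   # top-1: class 2, top-2 also includes class 0
    [0.25, 0.25, 0.30, 0.20],   # top-1: class 2
])
labels = np.array([1, 0, 3])

print(rank_n_accuracy(probs, labels, n=1))  # 1 of 3 correct -> ~0.333
print(rank_n_accuracy(probs, labels, n=2))  # 2 of 3 correct -> ~0.667
```

Note that rank-1 accuracy is just ordinary accuracy. Keras also ships a built-in metric for this, `tf.keras.metrics.SparseTopKCategoricalAccuracy(k=N)`, which you can pass to `model.compile(metrics=[...])` instead of computing it by hand.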

This is the reasoning behind rank-N accuracy, a more lenient and fairer way of measuring a predictive model's...