TensorFlow 2.0 Computer Vision Cookbook

By: Jesús Martínez

Overview of this book

Computer vision is a scientific field that enables machines to identify and process digital images and videos. This book focuses on independent recipes to help you perform various computer vision tasks using TensorFlow. The book begins by taking you through the basics of deep learning for computer vision, along with covering TensorFlow 2.x’s key features, such as the Keras and tf.data.Dataset APIs. You’ll then learn about the ins and outs of common computer vision tasks, such as image classification, transfer learning, image enhancing and styling, and object detection. The book also covers autoencoders in domains such as inverse image search indexes and image denoising, while offering insights into various architectures used in the recipes, such as convolutional neural networks (CNNs), region-based CNNs (R-CNNs), VGGNet, and You Only Look Once (YOLO). Moving on, you’ll discover tips and tricks to solve any problems faced while building various computer vision applications. Finally, you’ll delve into more advanced topics such as Generative Adversarial Networks (GANs), video processing, and AutoML, concluding with a section focused on techniques to help you boost the performance of your networks. By the end of this TensorFlow book, you’ll be able to confidently tackle a wide range of computer vision problems using TensorFlow 2.x.

Using incremental learning to train a classifier

One of the problems with traditional machine learning libraries, such as scikit-learn, is that they rarely support training models on very large volumes of data, which, incidentally, is exactly the kind of data deep neural networks thrive on. What good is having large amounts of data if we can't use it?

Fortunately, there is a way to circumvent this limitation, and it's called incremental learning. In this recipe, we'll use a powerful library, creme, to train a classifier on a dataset too big to fit in memory.
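
To make this concrete, here is a minimal sketch of incremental (also called online) learning with creme: the model consumes one example at a time through fit_one(), so the full dataset never has to be loaded into memory at once. The toy feature dictionaries and labels below are invented purely for illustration.

from creme import compose, linear_model, metrics, preprocessing

# Each example is a plain dict of feature name -> value, plus a label.
# In practice, these records would be streamed from disk one at a time.
stream = [
    ({'height': 120.0, 'width': 80.0}, 0),
    ({'height': 300.0, 'width': 210.0}, 1),
    ({'height': 110.0, 'width': 95.0}, 0),
    ({'height': 280.0, 'width': 200.0}, 1),
]

# Scale the features on the fly, then feed them to a logistic regression.
model = compose.Pipeline(
    preprocessing.StandardScaler(),
    linear_model.LogisticRegression()
)

accuracy = metrics.Accuracy()

for features, label in stream:
    # Predict before learning (progressive validation), then update the model.
    prediction = model.predict_one(features)
    accuracy.update(label, prediction)
    model.fit_one(features, label)

print(accuracy)

Because the model only ever holds its own parameters plus the current example, memory usage stays constant no matter how large the dataset grows.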

Getting ready

In this recipe, we'll leverage creme, an experimental library designed specifically for training machine learning models on datasets too large to fit in memory. To install creme, execute the following command:

$> pip install creme==0.5.1
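
As a rough sketch of how the training loop might look once the features are extracted, the snippet below reads features.hdf5 in small slices with h5py and feeds each row to a creme classifier. The 'features' and 'labels' dataset names, the batch size, and the binary LogisticRegression are assumptions made for illustration; adjust them to match how the file was actually written (for multiclass labels, creme's multiclass.OneVsRestClassifier wrapper can be used instead).

import h5py
from creme import linear_model, metrics

# NOTE: the 'features' and 'labels' dataset names are assumptions about how
# features.hdf5 is laid out; change them to match your file.
DB_PATH = 'features.hdf5'
BATCH_SIZE = 256

model = linear_model.LogisticRegression()
accuracy = metrics.Accuracy()

with h5py.File(DB_PATH, 'r') as db:
    num_samples = db['features'].shape[0]

    for start in range(0, num_samples, BATCH_SIZE):
        end = min(start + BATCH_SIZE, num_samples)

        # Only this slice of the file is pulled into memory.
        feature_batch = db['features'][start:end]
        label_batch = db['labels'][start:end]

        for row, label in zip(feature_batch, label_batch):
            # creme expects a dict of feature name -> value.
            x = {str(i): float(v) for i, v in enumerate(row)}
            y = int(label)

            prediction = model.predict_one(x)
            accuracy.update(y, prediction)
            model.fit_one(x, y)

print(accuracy)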

We'll use the features.hdf5 dataset we generated in the Implementing a feature extractor using a pre-trained network recipe in this...