TensorFlow 2.0 Computer Vision Cookbook

By: Jesús Martínez

Overview of this book

Computer vision is a scientific field that enables machines to identify and process digital images and videos. This book focuses on independent recipes to help you perform various computer vision tasks using TensorFlow. The book begins by taking you through the basics of deep learning for computer vision, along with covering TensorFlow 2.x’s key features, such as the Keras and tf.data.Dataset APIs. You’ll then learn about the ins and outs of common computer vision tasks, such as image classification, transfer learning, image enhancing and styling, and object detection. The book also covers autoencoders in domains such as inverse image search indexes and image denoising, while offering insights into various architectures used in the recipes, such as convolutional neural networks (CNNs), region-based CNNs (R-CNNs), VGGNet, and You Only Look Once (YOLO). Moving on, you’ll discover tips and tricks to solve any problems faced while building various computer vision applications. Finally, you’ll delve into more advanced topics such as Generative Adversarial Networks (GANs), video processing, and AutoML, concluding with a section focused on techniques to help you boost the performance of your networks. By the end of this TensorFlow book, you’ll be able to confidently tackle a wide range of computer vision problems using TensorFlow 2.x.

Chapter 3: Harnessing the Power of Pre-Trained Networks with Transfer Learning

Despite the undeniable power deep neural networks bring to computer vision, they are notoriously difficult to tune, train, and make performant. This difficulty stems from three main sources:

  • Deep neural networks only start to pay off when sufficient data is available, which, more often than not, is not the case. Furthermore, data is expensive to gather and sometimes impossible to expand.
  • Deep neural networks contain a large number of parameters that must be tuned, each of which can affect the model's overall performance.
  • Deep learning is very resource-intensive in terms of time, hardware, and effort.

Do not be dismayed! With transfer learning, we can save ourselves loads of time and effort by leveraging the rich amount of knowledge present in seminal architectures that have been pre-trained on gargantuan datasets, such as ImageNet. And the best part? Besides being such a powerful and useful tool, transfer learning...
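
To give a flavor of the idea before diving into the recipes, the following is a minimal sketch of transfer learning as feature extraction with Keras: a base network pre-trained on ImageNet is frozen and a small classifier head is trained on top of it. The input shape, number of classes, and head architecture are illustrative placeholders, not the exact setup used in this chapter's recipes.

```python
import tensorflow as tf

# Load a VGG16 base pre-trained on ImageNet, without its classification head.
base_model = tf.keras.applications.VGG16(weights="imagenet",
                                         include_top=False,
                                         input_shape=(224, 224, 3))

# Freeze the convolutional base so its ImageNet-learned features are preserved.
base_model.trainable = False

# Attach a small classifier head for our own task (NUM_CLASSES is a placeholder).
NUM_CLASSES = 10
model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Only the new head's weights are trained, e.g.:
# model.fit(train_dataset, validation_data=val_dataset, epochs=10)
```

Because only the small head is trained while the pre-trained base stays frozen, this approach works well even with modest datasets and modest hardware, which is exactly the pain point described above.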