Hands-On Neural Networks with TensorFlow 2.0

By: Paolo Galeone

Overview of this book

TensorFlow, the most popular and widely used machine learning framework, has made it possible for almost anyone to develop machine learning solutions with ease. With TensorFlow (TF) 2.0, you'll explore a revamped framework structure, offering a wide variety of new features aimed at improving productivity and ease of use for developers. This book covers machine learning with a focus on developing neural network-based solutions. You'll start by getting familiar with the concepts and techniques required to build solutions to deep learning problems. As you advance, you’ll learn how to create classifiers, build object detection and semantic segmentation networks, train generative models, and speed up the development process using TF 2.0 tools such as TensorFlow Datasets and TensorFlow Hub. By the end of this TensorFlow book, you'll be ready to solve any machine learning problem by developing solutions using TF 2.0 and putting them into production.
Table of Contents (15 chapters)

Section 1: Neural Network Fundamentals
Section 2: TensorFlow Fundamentals
Section 3: The Application of Neural Networks

Efficient data input pipelines

Data is the most critical part of every machine learning pipeline: the model learns from it, and its quantity and quality are game-changers in every machine learning application.

Feeding data to a Keras model has so far seemed natural: we can fetch the dataset as a NumPy array, create the batches, and feed the batches to the model to train it using mini-batch gradient descent.
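The manual batching described above can be sketched in plain Python (the function name and toy data are illustrative, not from the book):

```python
# A minimal sketch of manual mini-batch creation; names are illustrative.
def make_batches(dataset, batch_size):
    """Split a dataset into consecutive mini-batches.

    The slicing indexes must be managed by hand: an off-by-one error
    here silently drops or duplicates samples.
    """
    batches = []
    for start in range(0, len(dataset), batch_size):
        batches.append(dataset[start:start + batch_size])
    return batches

# Toy "dataset" of 10 samples split into batches of 3;
# the last batch is shorter, a corner case manual code must handle.
batches = make_batches(list(range(10)), batch_size=3)
print(batches)  # [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
```

Each batch would then be passed to the model in turn for one mini-batch gradient descent step; in practice the data would be NumPy arrays rather than lists, but the bookkeeping burden is the same.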

However, the input-feeding approach shown so far is, in fact, hugely inefficient and error-prone, for the following reasons:

  • The complete dataset can weigh several thousand gigabytes: no standard computer, or even a deep learning workstation, has the memory required to load such huge datasets.
  • Manually creating the input batches means handling the slicing indexes by hand, which is error-prone.
  • Doing data augmentation, applying random perturbations to each input...