The TensorFlow Workshop

By: Matthew Moocarme, Abhranshu Bagchi, Anthony So, Anthony Maddalone

Overview of this book

Getting to grips with tensors, deep learning, and neural networks can be intimidating and confusing for anyone, no matter their experience level. The breadth of information out there, often written at a very high level and aimed at advanced practitioners, can make getting started even more challenging. If this sounds familiar to you, The TensorFlow Workshop is here to help. Combining clear explanations, realistic examples, and plenty of hands-on practice, it’ll quickly get you up and running.

You’ll start off with the basics – learning how to load data into TensorFlow, perform tensor operations, and utilize common optimizers and activation functions. As you progress, you’ll experiment with different TensorFlow development tools, including TensorBoard, TensorFlow Hub, and Google Colab, before moving on to solve regression and classification problems with sequential models.

Building on this solid foundation, you’ll learn how to tune models and work with different types of neural networks, getting hands-on with real-world deep learning applications such as text encoding, temperature forecasting, image augmentation, and audio processing. By the end of this deep learning book, you’ll have the skills, knowledge, and confidence to tackle your own ambitious deep learning projects with TensorFlow.

Introduction

Sequential data refers to datasets in which each data point depends on the ones that came before it. Think of a sentence, which is composed of a sequence of related words: a verb is linked to its subject, and an adverb modifies a verb. Another example is a stock price, where the price on a particular day is related to the prices on the previous days. Traditional neural networks are not well suited to processing this kind of data. There is, however, a specific type of architecture that can ingest sequences of data. This chapter will introduce you to such models, known as recurrent neural networks (RNNs).
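To make the idea concrete, here is a minimal sketch of how sequential data such as a stock price can be arranged for a sequence model in TensorFlow. The random placeholder prices and the window size of 7 days are illustrative assumptions, not values from the chapter; the key point is the three-dimensional shape (batch size, timesteps, features) that sequence layers expect.

```python
import numpy as np
import tensorflow as tf

# Hypothetical data: 30 days of closing prices for a single stock.
closing_prices = np.random.rand(30).astype("float32")

# Split the series into overlapping 7-day windows; each window is one
# sequence in which every day depends on the days before it.
window_size = 7
windows = np.array([closing_prices[i:i + window_size]
                    for i in range(len(closing_prices) - window_size)])

# Sequence models in TensorFlow expect a 3D tensor of shape
# (batch_size, timesteps, features). Here each day has one feature.
sequences = tf.constant(windows[..., np.newaxis])
print(sequences.shape)  # (23, 7, 1)
```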

An RNN model is a specific type of deep learning architecture in which the output of the model at one step is fed back in as part of the input at the next step. Models of this kind come with their own challenges (known as vanishing and exploding gradients) that will be addressed later in the chapter.
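As a first look at what such an architecture looks like in code, the sketch below builds a small recurrent model with the Keras Sequential API. The layer size of 32 units, the 7-timestep single-feature input shape, and the single-output regression head are illustrative assumptions rather than a prescription from the chapter.

```python
import tensorflow as tf

# A minimal sketch of a recurrent model (assumed layer sizes and input shape).
model = tf.keras.Sequential([
    # SimpleRNN passes its output at each timestep back into itself at the
    # next timestep, which is the feedback loop described above.
    tf.keras.layers.SimpleRNN(32, input_shape=(7, 1)),
    # A single dense unit for a regression-style output.
    tf.keras.layers.Dense(1),
])

model.compile(optimizer="adam", loss="mse")
model.summary()
```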

In many ways, an RNN is a representation of how a brain might work. RNNs use...