Creating TensorFlow graphs


Now that our data is all set up, we can construct a model that will learn how to classify iris flowers. We'll build one of the simplest machine learning models: a linear classifier.

A linear classifier works by calculating the dot product between an input feature vector x and a weight vector w. After calculating the dot product, we add a value to the result called a bias term, b. In our case, there are three possible classes that any input feature vector could belong to, so we need to compute three different dot products, with w1, w2, and w3, to see which class it belongs to. Rather than writing out three separate dot products, we can do a single matrix multiplication between a weight matrix of shape [3, 4] and our input vector.
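To see the shapes involved more concretely, here is a minimal sketch (using NumPy purely for illustration; the numbers are made up and not from the book):

import numpy as np

x = np.array([5.1, 3.5, 1.4, 0.2])   # one feature vector, shape [4]
W = np.random.randn(3, 4)            # one row of weights per class, shape [3, 4]
b = np.zeros(3)                      # one bias term per class, shape [3]

s = W @ x + b                        # three class scores, shape [3]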

We can simplify this down to the more compact form s = Wx + b, where W is our weight matrix, b is the bias, x is our input feature vector, and s is the resulting output vector of class scores.

Variables

How do we write all of this out in TensorFlow code? Let's start by creating our weights and biases. In TensorFlow, if we want to create Tensors whose values can be manipulated by our code, we need to use TensorFlow variables. TensorFlow variables are instances of the tf.Variable class. A tf.Variable represents a tf.Tensor object whose value can be changed by running TensorFlow operations on it. Variables are Tensor-like objects, so they can be passed around in the same way Tensors can, and any operation that can be used with a Tensor can be used with a variable.
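As a minimal sketch of this idea (the variable name and values here are our own illustration, not from the book):

import tensorflow as tf

# A variable holding a single integer; tf.assign_add returns an op that,
# each time it is run in a session, adds 1 to the variable's value.
counter = tf.get_variable(name="counter", initializer=tf.constant(0))
increment_op = tf.assign_add(counter, 1)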

To create a variable, we can use tf.get_variable(). When you call this function, you must supply a name for your variable. The function will first check that there is no other variable with the same name already on the graph; if there isn't, it will create a new one and add it to the TensorFlow graph.
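For example (an illustrative snippet of ours, not from the book):

w = tf.get_variable(name="w", shape=[2])
# A second call such as tf.get_variable(name="w", shape=[2]) in the same
# graph would raise a ValueError, because a variable named "w" already exists.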

You must also specify the shape that you want your variable to have; alternatively, you can initialize your variable using a tf.constant Tensor. The variable will take the value of your constant Tensor, and its shape will be automatically inferred. For example, the following will produce a Tensor of shape [2] containing the values 21 and 25:

my_variable = tf.get_variable(name="my_variable", initializer=tf.constant([21, 25]))
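Note that variables must be initialized in a session before their values can be read. A minimal usage sketch (our own illustration, reusing the my_variable defined above):

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # give all variables their initial values
    print(sess.run(my_variable))                 # prints: [21 25]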

Operations

It's all well and good having variables in our graph, but we also want to do something with them. We can use TensorFlow ops to manipulate our variables.

As explained, our linear classifier is just a matrix multiplication, so the first op you will use is, fittingly, the matrix multiplication op. Simply call tf.matmul() on the two Tensors you want to multiply together, and the result will be the matrix product of the two Tensors you passed in. Simple!
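For instance (a made-up illustration, not from the book):

a = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])
b = tf.constant([[5.0, 6.0],
                 [7.0, 8.0]])
product = tf.matmul(a, b)  # a [2, 2] Tensor: [[19, 22], [43, 50]]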

Throughout this book, you will learn about many different TensorFlow ops that you will need to use.

Now that you have some understanding of variables and ops, let's construct our linear model. We'll define our model within a function. The function takes as input N of our feature vectors, or, to be more precise, a batch of size N. As our feature vectors are of length 4, our batch will be a Tensor of shape [N, 4]. The function then returns the output of our linear model. The following code defines our linear model function; it should be self-explanatory, but keep reading if you have not completely understood it yet.

def linear_model(input):
    # Create variables for our weights and biases.
    my_weights = tf.get_variable(name="weights", shape=[4, 3])
    my_bias = tf.get_variable(name="bias", shape=[3])

    # Create a linear classifier.
    linear_layer = tf.matmul(input, my_weights)
    linear_layer_out = tf.nn.bias_add(value=linear_layer, bias=my_bias)
    return linear_layer_out

In the preceding code, we create variables that will store our weights and biases. We give them names and supply the required shapes. Remember, we are using variables because we want to manipulate their values using operations.

Next, we create a tf.matmul node that takes as arguments our input feature matrix and our weight matrix. The result of this op can be accessed through our linear_layer Python variable. This result is then passed to another op, tf.nn.bias_add. This op comes from the NN (neural network) module and is used when we wish to add a bias vector to the result of a calculation. The bias has to be a one-dimensional Tensor.
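To make the flow concrete, here is a minimal usage sketch (the placeholder x and its shape are our own illustration, not from the book):

# A placeholder for a batch of N feature vectors; None leaves N unspecified.
x = tf.placeholder(tf.float32, shape=[None, 4])
model_out = linear_model(x)  # an [N, 3] Tensor: one score per class, per example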