#### Overview of this book

Dan Van Boxel’s Deep Learning with TensorFlow is based on Dan’s best-selling TensorFlow video course. With deep learning going mainstream, using deep networks to make sense of data and get accurate results is within anyone's reach. Dan Van Boxel will be your guide to exploring the possibilities of deep learning; he will enable you to understand data like never before. With the efficiency and simplicity of TensorFlow, you will be able to process your data and gain insights that will change how you look at it. With Dan’s guidance, you will dig deeper into the hidden layers of abstraction using raw data. Dan then shows you various complex algorithms for deep learning, along with examples that use these deep neural networks. You will also learn how to train your machine to craft new features that make sense of deeper layers of data. In this book, Dan shares his knowledge across topics such as logistic regression, convolutional neural networks, recurrent neural networks, training deep networks, and high-level interfaces. With the help of novel practical examples, you will become an ace at advanced multilayer networks, image recognition, and beyond.
Hands-On Deep Learning with TensorFlow

## Simple computations

First, we're going to take a look at the tensor object type. Then we'll have a graphical understanding of TensorFlow to define computations. Finally, we'll run the graphs with sessions, showing how to substitute intermediate values.

### Defining scalars and tensors

The first thing you need to do is download the source code pack for this book and open the `simple.py` file. You can either use this file to copy and paste lines into TensorFlow or CoCalc, or type them in directly yourself. First, let's import `tensorflow` as `tf`. This is a convenient way to refer to it in Python. You'll want to hold your constant numbers in `tf.constant` calls. For example, let's do `a = tf.constant(1)` and `b = tf.constant(2)`:

```
import tensorflow as tf
# You can create constants in TF to hold specific values
a = tf.constant(1)
b = tf.constant(2)
```

Of course, you can add and multiply these to get other values, namely `c` and `d`:

```
# Of course you can add, multiply, and compute on these as you like
c = a + b
d = a * b
```

TensorFlow numbers are stored in tensors, a fancy term for multidimensional arrays. If you pass a Python list to TensorFlow, it does the right thing and converts it into an appropriately dimensioned tensor. You can see this illustrated in the following code:

```
# TF numbers are stored in "tensors", a fancy term for multidimensional
# arrays. If you pass TF a Python list, it can convert it
V1 = tf.constant([1., 2.])                # Vector, 1-dimensional
V2 = tf.constant([3., 4.])                # Vector, 1-dimensional
M = tf.constant([[1., 2.]])               # Matrix, 2d
N = tf.constant([[1., 2.], [3., 4.]])     # Matrix, 2d
K = tf.constant([[[1., 2.], [3., 4.]]])   # Tensor, 3d+
```

The `V1` vector, a one-dimensional tensor, is passed as a Python list of `[1., 2.]`. The dots here just force Python to store the numbers as decimal values rather than integers. The `V2` vector is another Python list of `[3., 4.]`. The `M` variable is a two-dimensional matrix made from a list of lists in Python, creating a two-dimensional tensor in TensorFlow. The `N` variable is also a two-dimensional matrix. Note that this one actually has multiple rows in it. Finally, `K` is a true tensor, containing three dimensions. Note that the first dimension contains just one entry, a single two-by-two box.

Don't worry if this terminology is a bit confusing. Whenever you see a strange new variable, you can jump back to this point to understand what it might be.
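If it helps to see these shapes concretely, the same nesting rules apply in plain NumPy (used here only as an illustration; TensorFlow infers shapes from Python lists the same way):

```
import numpy as np

# The nesting depth of the Python list determines the tensor's rank
V1 = np.array([1., 2.])                  # shape (2,): a vector
M = np.array([[1., 2.]])                 # shape (1, 2): a one-row matrix
N = np.array([[1., 2.], [3., 4.]])       # shape (2, 2): a square matrix
K = np.array([[[1., 2.], [3., 4.]]])     # shape (1, 2, 2): one 2x2 box

print(V1.shape, M.shape, N.shape, K.shape)
```

Printing `.shape` is a quick way to confirm how deeply nested a list really is before handing it to TensorFlow.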

### Computations on tensors

You can also do simple things, such as add tensors together:

`V3 = V1 + V2`

Alternatively, you can multiply them element-wise, so each common position is multiplied together:

```
# Operations are element-wise by default
M2 = M * M
```

For true matrix multiplication, however, you need to use `tf.matmul`, passing in your two tensors as arguments:

`NN = tf.matmul(N,N)`
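The distinction is easy to check outside TensorFlow. In plain NumPy (shown only as an illustration of the same arithmetic), `*` is element-wise while `@` (or `np.matmul`) is true matrix multiplication:

```
import numpy as np

N = np.array([[1., 2.], [3., 4.]])

elementwise = N * N   # squares each entry: [[1, 4], [9, 16]]
matmul = N @ N        # rows times columns: [[7, 10], [15, 22]]

print(elementwise)
print(matmul)
```

The `[[7, 10], [15, 22]]` result is exactly what `sess.run(NN)` will produce later in this section.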

### Doing computation

Everything so far has just specified the TensorFlow graph; we haven't yet computed anything. To do this, we need to start a session in which the computations will take place. The following code creates a new session:

`sess = tf.Session()`

Once you have a session open, calling `sess.run(NN)` will evaluate the given expression and return an array. We can easily capture the result in a variable by doing the following:

```
output = sess.run(NN)
print("NN is:")
print(output)
```

If you run this cell now, you should see the correct tensor array for the `NN` output on the screen.

When you're done using your session, it's good to close it, just like you would close a file handle:

```
# Remember to close your session when you're done using it
sess.close()
```

For interactive work, we can use `tf.InteractiveSession()` like so:

`sess = tf.InteractiveSession()`

You can then easily compute the value of any node. For example, entering the following code and running the cell will output the value of `M2`:

```
# Now we can compute any node
print("M2 is:")
print(M2.eval())
```

### Variable tensors

Of course, not all our numbers are constant. To update weights in a neural network, for example, we need to use `tf.Variable` to create the appropriate object:

`W = tf.Variable(0, name="weight")`

Note that variables in TensorFlow are not initialized automatically. To do so, we need to use a special call, namely `tf.global_variables_initializer()`, and then run that call with `sess.run()`:

```
init_op = tf.global_variables_initializer()
sess.run(init_op)
```

This puts the initial value into the variable; in this case, it stuffs a `0` value into `W`. Let's just verify that `W` has that value:

```
print("W is:")
print(W.eval())
```

You should see an output value for `W` of `0` in your cell.

Let's see what happens when you add `a` to it:

```
W += a
print(W.eval())
```

Recall that `a` is `1`, so you get the expected value of `1` here.

Let's add `a` again, just to make sure we can increment and that it's truly a variable:

```
W += a
print(W.eval())
```

Now you should see that `W` is holding `2`, as we have incremented it twice with `a`.
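One subtlety worth noting: `W += a` here does not mutate the variable in place. Python rewrites it as `W = W + a`, so the name `W` is rebound to a new addition node in the graph, which is evaluated on demand each time. A rough plain-Python sketch of that deferred behavior (an illustration, not TensorFlow code):

```
# Model each graph "node" as a zero-argument function evaluated on demand
a = lambda: 1    # stands in for tf.constant(1)
W = lambda: 0    # stands in for the initialized variable

def add(x, y):
    # Builds a new node combining two nodes, instead of mutating either
    return lambda: x() + y()

W = add(W, a)    # the analogue of W += a: rebinds W to a new node
print(W())       # evaluating the chain gives 1
W = add(W, a)    # W += a again
print(W())       # now the chain evaluates to 2
```

This is why each `W.eval()` above re-evaluates the whole chain of additions rather than reading a stored counter.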

### Viewing and substituting intermediate values

You can return or supply arbitrary nodes when doing a TensorFlow computation. Let's define a new node but also return another node at the same time in a fetch call. First, let's define our new node `E`, as shown here:

`E = d + b # 1*2 + 2 = 4`

Let's take a look at what `E` starts as:

```
print("E as defined:")
print(E.eval())
```

You should see that, as expected, `E` equals `4`. Now let's see how we can pass in multiple nodes, `E` and `d`, to return multiple values from a `sess.run` call:

```
# Let's see what d was at the same time
print("E and d:")
print(sess.run([E, d]))
```

You should see multiple values, namely `4` and `2`, returned in your output.

Now suppose we want to use a different intermediate value, say for debugging purposes. We can use `feed_dict` to supply a custom value to a node anywhere in our computation when returning a value. Let's do that now with `d` equals `4` instead of `2`:

```
# Use a custom d by specifying a dictionary
print("E with custom d=4:")
print(sess.run(E, feed_dict={d: 4.}))
```

Remember that `E` equals `d + b` and the values of `d` and `b` are both `2`. Although we've inserted a new value of `4` for `d`, you should see that the value of `E` will now be output as `6`.
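Conceptually, `feed_dict` just overrides a node's value for a single `run` call. A rough plain-Python analogue (an illustration, not TensorFlow code) of computing `E` with and without a substituted `d`:

```
a, b = 1, 2

def d(override=None):
    # Normally d = a * b; feeding a value amounts to supplying an override
    return override if override is not None else a * b

def E(d_override=None):
    return d(d_override) + b

print(E())    # the graph's own value: 2 + 2 = 4
print(E(4.))  # with d "fed" as 4: 4 + 2 = 6.0
```

The override lasts only for that one evaluation; the graph itself is unchanged, which is what makes `feed_dict` handy for debugging.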

You have now learned how to do core computations with TensorFlow tensors. It's time to take the next step forward by building a logistic regression model.