Hands-On Deep Learning with TensorFlow

By Dan Van Boxel

Deeper CNN


In this section, we're going to add another convolutional layer to our model. Don't worry, we'll walk through the parameters to make the sizing line up, and we'll learn what dropout training is.

Adding a second convolutional layer

As usual, when starting a new model, open a fresh IPython session and execute the code up to num_filters1. Now you're all set. Let's jump into our convolutional model.

Let's be ambitious and give the first convolutional layer 16 filters, far more than the four in our old model. This time, though, we'll use a smaller window size: just 3x3. Also note that we've renamed some variables, such as num_filters to num_filters1. This is because we're going to add another convolutional layer in just a moment, and we may want a different number of filters on each. The rest of this layer is exactly as it was before: we convolve, do 2x2 max pooling, and use the rectified linear activation unit, as sketched below.
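To make that concrete, here is a minimal sketch of the layer in TensorFlow 1.x style, using the text's num_filters1 = 16 and 3x3 window. The 36x36 single-channel input shape and the weight initialization are assumptions carried over from earlier chapters, not fixed by this excerpt:

import math
import tensorflow as tf

num_filters1 = 16      # 16 filters in the first convolutional layer
winx1, winy1 = 3, 3    # 3x3 convolution window

# Assumed input: 36x36 grayscale images, reshaped to add a channel axis
x = tf.placeholder("float", [None, 36, 36])
x_im = tf.reshape(x, [-1, 36, 36, 1])

# Small random weights, plus one small positive bias per filter
W1 = tf.Variable(tf.truncated_normal(
    [winx1, winy1, 1, num_filters1],
    stddev=1. / math.sqrt(winx1 * winy1)))
b1 = tf.Variable(tf.constant(0.1, shape=[num_filters1]))

# Convolve, add the bias, and apply the rectified linear unit
h1 = tf.nn.relu(tf.nn.conv2d(x_im, W1,
        strides=[1, 1, 1, 1], padding='SAME') + b1)

# 2x2 max pooling halves each spatial dimension
p1 = tf.nn.max_pool(h1, ksize=[1, 2, 2, 1],
                    strides=[1, 2, 2, 1], padding='VALID')

With 'SAME' padding the convolution output stays 36x36 per filter, and the 2x2 pooling brings it down to 18x18, which is the shape the next convolutional layer will see.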

Now we add another convolutional...