Hands-On Convolutional Neural Networks with TensorFlow

By: Iffat Zafar, Giounona Tzanidou, Richard Burton, Nimesh Patel, Leonardo Araujo
Overview of this book

Convolutional Neural Networks (CNNs) are one of the most popular architectures used in computer vision applications. This book is an introduction to CNNs through solving real-world problems in deep learning while teaching you their implementation in the popular Python library, TensorFlow. By the end of the book, you will be training CNNs in no time! We start with an overview of popular machine learning and deep learning models, and then get you set up with a TensorFlow development environment. This environment is the basis for implementing and training deep learning models in later chapters. Then, you will use Convolutional Neural Networks to work on problems such as image classification, object detection, and semantic segmentation. After that, you will use transfer learning to see how these models can solve other deep learning problems. You will also get a taste of implementing generative models such as autoencoders and generative adversarial networks. Later on, you will see useful tips on machine learning best practices and troubleshooting. Finally, you will learn how to apply your models to large datasets of millions of images.

Model Initialization


As we add more and more layers to our models, it becomes harder and harder to train them using backpropagation. The error (gradient) values passed back through the model to update the weights become smaller and smaller the deeper into the network we go. This is known as the vanishing gradient problem.
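
To make this concrete, here is a minimal NumPy sketch (our own illustration, not code from the book; the depth, layer width, and weight scale are arbitrary assumptions) that pushes a gradient back through a stack of sigmoid layers and prints its norm at each step. The steadily shrinking values are the vanishing gradient in action:

    import numpy as np

    # Illustrative only: backpropagate a gradient through a deep stack of
    # sigmoid layers and watch its norm shrink layer by layer.
    rng = np.random.default_rng(0)
    depth, width = 20, 32

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Forward pass, keeping each layer's activation for the backward pass.
    a = rng.normal(size=(1, width))
    weights, activations = [], []
    for _ in range(depth):
        W = rng.normal(scale=0.5, size=(width, width))
        a = sigmoid(a @ W)
        weights.append(W)
        activations.append(a)

    # Backward pass: at each layer, multiply by the sigmoid derivative
    # and the transposed weight matrix (the chain rule).
    grad = np.ones((1, width))
    for W, act in zip(reversed(weights), reversed(activations)):
        grad = (grad * act * (1.0 - act)) @ W.T
        print(f"gradient norm: {np.linalg.norm(grad):.2e}")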

As a result, an important thing to look at before we start training our models is what values we initialize our weights to. A bad initialization can make the model very slow to converge, or perhaps never converge at all.

Although we do not know exactly what values our weights will end up with after training, one might reasonably expect that about half of them will be positive values and half will be negative.
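One simple way to get that behavior is to draw the initial weights from a small, zero-centered random distribution. As a sketch of what this might look like in TensorFlow (using the tf.keras API; the layer size and standard deviation are our own assumptions, not the book's recommendation):

    import tensorflow as tf

    # Sketch only: draw weights from a small zero-centred distribution so
    # that roughly half start positive and half negative.
    init = tf.keras.initializers.TruncatedNormal(mean=0.0, stddev=0.1)

    layer = tf.keras.layers.Dense(
        units=64,                  # arbitrary example size
        activation='relu',
        kernel_initializer=init,   # weights: small, zero-centred, random
        bias_initializer='zeros',  # zero biases are fine; zero weights are not
    )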

Do not initialize all weights with zeros

We might now be inclined to think that setting all our weights to zero will achieve maximum symmetry. However, this is actually a very bad idea, and our model will never learn anything. This is because when you do a forward pass, every neuron in a layer will produce exactly the same output. During backpropagation, every weight in that layer therefore receives exactly the same update, so the neurons remain identical to one another: the symmetry is never broken, and the model cannot learn anything useful.
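
A quick sketch makes this visible (a hypothetical toy network of our own, written against TensorFlow 2's tf.GradientTape for brevity, not the book's code): with every weight at zero, no gradient signal reaches the first layer at all, and the gradient rows for the output layer are all identical, so the neurons can never differentiate from one another.

    import tensorflow as tf

    # Toy example: a two-layer network with all weights initialised to zero.
    x = tf.random.normal([8, 4])        # a small batch of inputs
    y = tf.random.normal([8, 1])        # dummy regression targets

    W1 = tf.Variable(tf.zeros([4, 3]))  # hidden weights, all zero
    b1 = tf.Variable(tf.zeros([3]))
    W2 = tf.Variable(tf.zeros([3, 1]))  # output weights, all zero
    b2 = tf.Variable(tf.zeros([1]))

    with tf.GradientTape() as tape:
        h = tf.nn.sigmoid(tf.matmul(x, W1) + b1)  # every hidden unit outputs 0.5
        out = tf.matmul(h, W2) + b2
        loss = tf.reduce_mean(tf.square(out - y))

    dW1, dW2 = tape.gradient(loss, [W1, W2])
    print(dW1.numpy())  # all zeros: no learning signal reaches the first layer
    print(dW2.numpy())  # every row identical: hidden units stay interchangeable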