The Deep Learning with Keras Workshop

By: Matthew Moocarme, Mahla Abdolahnejad, Ritesh Bhagwat

Overview of this book

New experiences can be intimidating, but not this one! This beginner’s guide to deep learning is here to help you explore deep learning from scratch with Keras, and be on your way to training your first ever neural networks. What sets Keras apart from other deep learning frameworks is its simplicity. With over two hundred thousand users, Keras has a stronger adoption in industry and the research community than any other deep learning framework. The Deep Learning with Keras Workshop starts by introducing you to the fundamental concepts of machine learning using the scikit-learn package. After learning how to perform the linear transformations that are necessary for building neural networks, you'll build your first neural network with the Keras library. As you advance, you'll learn how to build multi-layer neural networks and recognize when your model is underfitting or overfitting to the training data. With the help of practical exercises, you’ll learn to use cross-validation techniques to evaluate your models and then choose the optimal hyperparameters to fine-tune their performance. Finally, you’ll explore recurrent neural networks and learn how to train them to predict values in sequential data. By the end of this book, you'll have developed the skills you need to confidently train your own neural network models.

Fine-Tuning a Pre-Trained Network

Fine-tuning means adjusting a pre-trained neural network so that it becomes more relevant to the task at hand. We freeze the initial layers of the network so that we don't lose the information stored in them; those layers have learned generic, broadly useful features. Once the new classifier has finished learning, we can unfreeze some of those layers and continue training, tweaking them slightly so that they fit the problem at hand even better. Suppose we have a pre-trained network that identifies animals. If we want to identify specific animals, such as dogs and cats, we can tweak its layers a little so that they learn what dogs and cats look like. In effect, we take the whole pre-trained network and add a new classifier layer on top, which we then train on images of dogs and cats. We will be doing a similar activity by using a pre-built network, adding a classifier on top of it, and training that classifier on pictures of dogs and cats.
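The following is a minimal sketch of this freeze-train-unfreeze workflow in Keras. It assumes a VGG16 base from tensorflow.keras.applications and a binary dogs-versus-cats head; the layer sizes, learning rates, and the train_generator/validation_generator names are illustrative assumptions, not the book's exact implementation:

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense
from tensorflow.keras.optimizers import Adam

# Load the convolutional base without its original classifier layers
base_model = VGG16(weights='imagenet', include_top=False,
                   input_shape=(224, 224, 3))

# Freeze the pre-trained layers so their generic features are preserved
# while the new classifier learns
base_model.trainable = False

# Add a new classifier on top of the frozen base
model = Sequential([
    base_model,
    Flatten(),
    Dense(256, activation='relu'),
    Dense(1, activation='sigmoid')  # dog vs. cat
])

model.compile(optimizer=Adam(learning_rate=1e-3),
              loss='binary_crossentropy',
              metrics=['accuracy'])
# model.fit(train_generator, epochs=5,
#           validation_data=validation_generator)  # hypothetical data generators

# After the classifier has converged, unfreeze the top of the base network
# and continue training with a much smaller learning rate
base_model.trainable = True
for layer in base_model.layers[:-4]:  # keep the earliest, most generic layers frozen
    layer.trainable = False

model.compile(optimizer=Adam(learning_rate=1e-5),
              loss='binary_crossentropy',
              metrics=['accuracy'])
# model.fit(train_generator, epochs=5,
#           validation_data=validation_generator)
```

Note that the model is recompiled after changing the trainable flags; the lower learning rate in the second phase is what keeps the unfrozen layers from being overwritten rather than gently tweaked.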

There is...