The TensorFlow Workshop

By: Matthew Moocarme, Abhranshu Bagchi, Anthony So, Anthony Maddalone

Overview of this book

Getting to grips with tensors, deep learning, and neural networks can be intimidating and confusing for anyone, no matter their experience level. The breadth of information out there, often written at a very high level and aimed at advanced practitioners, can make getting started even more challenging. If this sounds familiar to you, The TensorFlow Workshop is here to help. Combining clear explanations, realistic examples, and plenty of hands-on practice, it’ll quickly get you up and running.

You’ll start off with the basics – learning how to load data into TensorFlow, perform tensor operations, and utilize common optimizers and activation functions. As you progress, you’ll experiment with different TensorFlow development tools, including TensorBoard, TensorFlow Hub, and Google Colab, before moving on to solve regression and classification problems with sequential models.

Building on this solid foundation, you’ll learn how to tune models and work with different types of neural network, getting hands-on with real-world deep learning applications such as text encoding, temperature forecasting, image augmentation, and audio processing. By the end of this deep learning book, you’ll have the skills, knowledge, and confidence to tackle your own ambitious deep learning projects with TensorFlow.

Transfer Learning

In the previous chapter, you got hands-on practice training different CNN models for image classification. Even though you achieved good results, the models took quite some time to learn the relevant parameters, and had you kept training them, you could have achieved even better results. Using graphics processing units (GPUs) can shorten the training time, but it will still be considerable, especially for larger or more complex datasets.

Deep learning researchers have published their work for the benefit of the community. Rather than designing architectures from scratch, anyone can take an existing model architecture and customize it. Better still, researchers also share the trained weights of their models, so you can reuse not only an architecture but also all the training already performed on it. This is what transfer learning is about: by reusing pre-trained models, you don't have to start from scratch. These models are...
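To make this idea concrete, here is a minimal sketch of transfer learning in TensorFlow. It assumes MobileNetV2 pre-trained on ImageNet as the base model and a hypothetical 10-class target task; neither choice comes from this excerpt, and any architecture from tf.keras.applications could be substituted. The pre-trained base is frozen, and only a small new classification head is trained.

import tensorflow as tf

# Load MobileNetV2 pre-trained on ImageNet, without its original
# classification head (include_top=False).
base_model = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),
    include_top=False,
    weights='imagenet'
)

# Freeze the pre-trained weights so they are not updated during training.
base_model.trainable = False

# Stack a new classification head on top of the frozen base.
# The 10 output units are an assumption for illustration.
model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)

Because the base is frozen, only the parameters of the new head are learned, which is why transfer learning typically trains far faster than training a full network from scratch.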