Hands-On Convolutional Neural Networks with TensorFlow

By: Araujo, Zafar, Tzanidou, Burton, Patel

Overview of this book

Convolutional Neural Networks (CNNs) are one of the most popular architectures used in computer vision applications. This book is an introduction to CNNs through solving real-world problems in deep learning while teaching you their implementation in the popular Python library TensorFlow. By the end of the book, you will be training CNNs in no time! We start with an overview of popular machine learning and deep learning models, and then get you set up with a TensorFlow development environment. This environment is the basis for implementing and training deep learning models in later chapters. You will then use convolutional neural networks to work on problems such as image classification, object detection, and semantic segmentation. After that, you will use transfer learning to see how these models can solve other deep learning problems. You will also get a taste of implementing generative models such as autoencoders and generative adversarial networks. Later on, you will see useful tips on machine learning best practices and troubleshooting. Finally, you will learn how to apply your models to large datasets of millions of images.
Table of Contents (12 chapters)

Distributed computing in TensorFlow


In this section, you will learn how to distribute computation in TensorFlow. Knowing how to do this is important for the following reasons:

  • Running more experiments in parallel (for example, a grid search over hyperparameters)
  • Distributing model training over multiple GPUs (possibly on multiple servers) to reduce training time
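The first point can be sketched without any distributed infrastructure at all. The example below is a minimal stand-in, not TensorFlow API code: each grid point is an independent experiment, so they can run concurrently (here via the standard library's `ThreadPoolExecutor`; in practice each job would be a full training run, possibly on its own machine). The toy objective and parameter names are illustrative assumptions.

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def run_experiment(params):
    """Stand-in for one training run; returns (params, validation score)."""
    lr, batch_size = params
    # Toy objective peaking at lr=0.01, batch_size=64 (purely illustrative).
    score = -(lr - 0.01) ** 2 - (batch_size - 64) ** 2 / 1e4
    return params, score

# The full hyperparameter grid: every (learning rate, batch size) combination.
grid = list(product([0.001, 0.01, 0.1], [32, 64, 128]))

# Each grid point is independent, so the experiments can run in parallel.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_experiment, grid))

best_params, best_score = max(results, key=lambda r: r[1])
print(best_params)  # (0.01, 64) maximizes the toy objective
```

Because the experiments share nothing, the same pattern scales from threads on one machine to jobs scattered across a cluster.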

One famous use case is a paper published by Facebook showing how to train ImageNet in 1 hour (instead of weeks): they trained a ResNet-50 on ImageNet using 256 GPUs distributed across 32 servers, with a batch size of 8,192 images.
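A quick sanity check on those numbers: in this kind of setup the global batch is split evenly across the GPUs, so each device processes only a small slice per step. The arithmetic below uses the figures quoted above.

```python
# Split a global batch across GPUs (figures from the Facebook ResNet-50 setup above).
global_batch_size = 8192
num_gpus = 256

per_gpu_batch = global_batch_size // num_gpus
print(per_gpu_batch)  # 32 images per GPU per step
```

So each GPU sees an ordinary-looking batch of 32 images, while the cluster as a whole consumes 8,192 images per step.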

Model/data parallelism

 

There are mainly two ways to achieve parallelism and scale your task across multiple servers:

  • Model parallelism: When your model does not fit on a single GPU, you compute different layers of the model on different devices or servers.
  • Data parallelism: When the same model is replicated on different servers, but each handles a different batch, so each server computes a different gradient and we need some sort of...
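To make the data-parallel picture concrete, here is a minimal pure-Python sketch (not TensorFlow API code): each simulated worker computes a gradient for the same model on its own batch, and the per-worker gradients are then combined by averaging, which is one common way to reconcile them in synchronous training. The toy model (a single weight fit to y = 2x) and learning rate are illustrative assumptions.

```python
# Minimal data-parallelism sketch: each "server" computes a gradient for the
# same linear model w*x on its own batch; the gradients are then averaged.

def gradient_on_batch(w, batch):
    """Gradient of the mean squared error 0.5*(w*x - y)^2 w.r.t. w, averaged over the batch."""
    grads = [(w * x - y) * x for x, y in batch]
    return sum(grads) / len(grads)

def data_parallel_step(w, batches, lr=0.1):
    # Each worker handles a different batch with the same model parameters.
    per_worker_grads = [gradient_on_batch(w, b) for b in batches]
    # Combine the per-worker gradients by averaging (synchronous update).
    avg_grad = sum(per_worker_grads) / len(per_worker_grads)
    return w - lr * avg_grad

# Two workers, each with its own batch of (x, y) pairs drawn from y = 2*x.
batches = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0), (4.0, 8.0)]]
w = 0.0
for _ in range(100):
    w = data_parallel_step(w, batches)
print(round(w, 3))  # converges to 2.0
```

The key property is that every worker ends each step with identical parameters, exactly as if one machine had processed the combined batch.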