Neural Networks with Keras Cookbook

By: V Kishore Ayyadevara

Overview of this book

This book will take you from the basics of neural networks to advanced implementations of architectures using a recipe-based approach. We will learn how neural networks work and how various hyperparameters affect a network's accuracy, and we will leverage neural networks for both structured and unstructured data. Later, we will learn how to classify and detect objects in images. We will also learn to use transfer learning for multiple applications, including a self-driving car that uses Convolutional Neural Networks. We will generate images by leveraging GANs and also by performing image encoding. Additionally, we will perform text analysis using word vector-based techniques. Later, we will use Recurrent Neural Networks and LSTMs to implement chatbots and machine translation systems. Finally, you will learn about transcribing images and audio, generating captions, and using Deep Q-learning to build an agent that plays the Space Invaders game. By the end of this book, you will have developed the skills to choose and customize multiple neural network architectures for the various deep learning problems you might encounter.

Performing non-max suppression

In the previous section, we considered only the candidates that are not background and, among those, the single candidate with the highest probability of containing the object of interest. However, this approach fails when multiple objects are present in an image.

In this section, we will discuss ways to shortlist the candidate region proposals so that we can extract as many objects as possible from the image.

Getting ready

The strategy we adopt to perform non-max suppression (NMS) is as follows (a code sketch follows the list):

  • Extract the region proposals from an image
  • Reshape the region proposals and predict the object that is contained in the image
  • If the object is non-background, we shall keep...
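
To make the suppression step concrete, the following is a minimal NumPy sketch of IoU-based non-max suppression. The function name non_max_suppression, the (x1, y1, x2, y2) box format, and the iou_threshold default are illustrative assumptions rather than the book's exact implementation:

import numpy as np

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    # boxes: (N, 4) array of (x1, y1, x2, y2) corner coordinates (assumed format)
    # scores: (N,) array of object probabilities for each region proposal
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]  # box indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]  # the highest-scoring remaining box is always kept
        keep.append(i)
        # Intersection of box i with every other remaining box
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # Drop the boxes that overlap box i beyond the threshold
        order = order[1:][iou <= iou_threshold]
    return keep

For example, two heavily overlapping boxes collapse to the higher-scoring one, while a distant box survives:

boxes = np.array([[10, 10, 50, 50], [12, 12, 52, 52], [100, 100, 150, 150]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(non_max_suppression(boxes, scores))  # [0, 2]: the second box is suppressed

Lowering iou_threshold suppresses overlapping proposals more aggressively, while raising it retains more overlapping detections.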