Neural Networks with Keras Cookbook

By: V Kishore Ayyadevara

Overview of this book

This book takes you from the basics of neural networks to advanced implementations of architectures using a recipe-based approach. We will learn how neural networks work and the impact of various hyperparameters on a network's accuracy, and we will leverage neural networks for structured and unstructured data. Later, we will learn how to classify and detect objects in images. We will also learn to use transfer learning for multiple applications, including a self-driving car that uses Convolutional Neural Networks. We will generate images by leveraging GANs and also by performing image encoding. Additionally, we will perform text analysis using word-vector-based techniques. Later, we will use Recurrent Neural Networks and LSTMs to implement chatbot and machine translation systems. Finally, you will learn about transcribing images and audio, generating captions, and using Deep Q-learning to build an agent that plays the Space Invaders game. By the end of this book, you will have developed the skills to choose and customize multiple neural network architectures for the various deep learning problems you might encounter.

Introduction

In the traditional approach to solving text-related problems, we would one-hot encode each word. However, if the dataset has thousands of unique words, the resulting one-hot-encoded vectors would have thousands of dimensions, which is likely to cause computational issues. Additionally, similar words do not have similar vectors in this scheme. Word2Vec is an approach that helps us obtain similar vectors for similar words.
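To make this concrete, the following is a minimal sketch of training Word2Vec with the gensim library (an assumption here, not the book's own code; the two-sentence corpus and the parameters are purely illustrative, and a real corpus is needed for meaningful similarities):

# Minimal Word2Vec sketch using gensim (illustrative only).
from gensim.models import Word2Vec

# A tiny, invented corpus: each sentence is a list of tokens.
sentences = [
    ["i", "enjoy", "playing", "chess"],
    ["i", "like", "playing", "chess"],
]

# Train a small skip-gram model; vector_size and epochs are arbitrary here.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1,
                 sg=1, seed=0, epochs=200)

# Similar words end up with similar (high cosine similarity) vectors.
print(model.wv.similarity("enjoy", "like"))

On a toy corpus of two sentences, the learned similarity is noisy; on a large corpus, words that appear in similar contexts (such as enjoy and like) converge to nearby vectors.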

To understand how Word2Vec is useful, let's explore the following problem.

Let's say we have two input sentences, for example:

I enjoy playing chess
I like playing chess

Intuitively, we know that enjoy and like are similar words. However, in traditional text mining, when we one-hot encode the words, our output looks as follows:

          I  enjoy  like  playing  chess
I         1    0     0       0       0
enjoy     0    1     0       0       0
like      0    0     1       0       0
playing   0    0     0       1       0
chess     0    0     0       0       1

Notice that one-hot encoding results in each word being assigned its own column. The major issue with one-hot encoding like this is that the Euclidean distance...
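The point can be verified directly: the Euclidean distance between any two distinct one-hot vectors is the same (the square root of 2), so enjoy and like end up no closer to each other than to any other word. Here is a minimal sketch, assuming the five-word vocabulary from the example above:

# Demonstration (illustrative): the Euclidean distance between any two
# distinct one-hot vectors is sqrt(2), so it carries no similarity signal.
import numpy as np

vocab = ["I", "enjoy", "like", "playing", "chess"]
one_hot = {w: np.eye(len(vocab))[i] for i, w in enumerate(vocab)}

def dist(a, b):
    return np.linalg.norm(one_hot[a] - one_hot[b])

print(dist("enjoy", "like"))   # 1.414...
print(dist("enjoy", "chess"))  # 1.414... -- the same distance

In other words, one-hot encoding treats every pair of distinct words as equally dissimilar, which is exactly the limitation that Word2Vec addresses.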