Deep Learning for Natural Language Processing

By: Karthiek Reddy Bokka, Shubhangi Hora, Tanuj Jain, Monicah Wambugu
Overview of this book

Applying deep learning approaches to various NLP tasks can take your computational algorithms to a completely new level in terms of speed and accuracy. Deep Learning for Natural Language Processing starts by highlighting the basic building blocks of the natural language processing domain. The book goes on to introduce the problems that you can solve using state-of-the-art neural network models. After this, delving into the various neural network architectures and their specific areas of application will help you to understand how to select the best model to suit your needs. As you advance through this deep learning book, you'll study convolutional, recurrent, and recursive neural networks, in addition to covering long short-term memory (LSTM) networks. Understanding these networks will help you to implement their models using Keras. In later chapters, you will develop a trigger word detection application using NLP techniques such as attention models and beam search. By the end of this book, you will not only have a sound knowledge of natural language processing, but will also be able to select the best text preprocessing and neural network models to solve a number of NLP issues.

RNNs

Recurrent means occurring repeatedly. The recurrent part of RNNs simply means that the same task is performed for every element of the input sequence (for RNNs, we give a sequence of timesteps as the input sequence). One main difference between feedforward networks and RNNs is that RNNs have memory elements, called states, that capture information from previous inputs. So, in this architecture, the current output depends not only on the current input but also on the current state, which takes past inputs into account.

RNNs are trained on sequences of inputs rather than single inputs; accordingly, we can consider each input to an RNN as a sequence of timesteps. The state elements in an RNN carry information about past inputs that is used to process the current input sequence.
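The state mechanism described above can be sketched as a simple forward pass in NumPy. This is a minimal illustration, not the book's own code: the weight names (`W_xh`, `W_hh`, `b_h`) and dimensions are assumptions chosen for clarity. The key point is that the same weights are reused at every timestep (the "recurrent" part), and the hidden state `h` carries information forward from one timestep to the next.

```python
import numpy as np

def rnn_forward(inputs, W_xh, W_hh, b_h, h0):
    """Run a vanilla RNN over a sequence of timesteps.

    At each step, the new state depends on both the current
    input x_t and the previous state h (the network's memory).
    """
    h = h0
    states = []
    for x_t in inputs:                           # one timestep at a time
        h = np.tanh(x_t @ W_xh + h @ W_hh + b_h) # state update
        states.append(h)
    return states

# Hypothetical dimensions for illustration
rng = np.random.default_rng(0)
T, input_dim, hidden_dim = 4, 3, 5
inputs = [rng.normal(size=(input_dim,)) for _ in range(T)]
W_xh = rng.normal(size=(input_dim, hidden_dim))   # input-to-hidden weights
W_hh = rng.normal(size=(hidden_dim, hidden_dim))  # hidden-to-hidden (recurrent) weights
b_h = np.zeros(hidden_dim)

states = rnn_forward(inputs, W_xh, W_hh, b_h, h0=np.zeros(hidden_dim))
print(len(states), states[-1].shape)  # one state per timestep
```

In Keras, this loop is what a layer such as `SimpleRNN` performs internally; the point of writing it out is to make the role of the state explicit.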

Figure 5.3: RNN structure

For each input in the input sequence, the RNN updates its state, calculates its output, and passes the state on to the next timestep in the sequence. The same set of tasks is repeated for...