Recurrent Neural Networks with Python Quick Start Guide

By: Simeon Kostadinov
Overview of this book

Developers struggle to find an easy-to-follow learning resource for implementing Recurrent Neural Network (RNN) models. RNNs are the state-of-the-art model in deep learning for dealing with sequential data. From language translation to generating captions for an image, RNNs are used to continuously improve results. This book will teach you the fundamentals of RNNs, with example applications in Python and the TensorFlow library. The examples are accompanied by the right combination of theoretical knowledge and real-world implementations of concepts to build a solid foundation of neural network modeling. Your journey starts with the simplest RNN model, where you can grasp the fundamentals. The book then builds on this by proposing more advanced and complex algorithms. We use them to explain how a typical state-of-the-art RNN model works. From generating text to building a language translator, we show how some of today's most powerful AI applications work under the hood. After reading the book, you will be confident with the fundamentals of RNNs, and be ready to pursue further study, along with developing skills in this exciting field.
Table of Contents (8 chapters)

Building a conversation

This step is very similar to the training step. The first difference is that we do not evaluate our predictions; instead, we use the input to generate results. The second difference is that we use the already trained set of variables to produce those results. You will see how this is done later in this chapter.

To make things clearer, we first initialize a new sequence-to-sequence model. Its purpose is to reuse the already trained weights and biases to make predictions on different sets of inputs. We have only an encoder and a decoder sequence, where the encoder sequence is the input sentence and the decoder sequence is fed one word at a time. We define the new model as follows:

encode_seqs2 = tf.placeholder(dtype=tf.int64, shape=[1, None], name="encode_seqs")
decode_seqs2 = tf.placeholder(dtype=tf.int64, shape=[1, None], name="decode_seqs")
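The shape of [1, None] reflects that, at inference time, we work with a single sentence of arbitrary length rather than a training batch. The overall inference loop can be sketched without any framework code: the encoder consumes the whole input sentence once, and the decoder is then fed one word at a time, starting from a start-of-sequence token, until it emits an end-of-sequence token. In the following minimal sketch, the encoder, the decoder step, and the special token ids are all stand-in assumptions replacing the trained weights and biases, not the book's actual model:

```python
import numpy as np

START_ID, END_ID = 0, 1  # hypothetical special-token ids

def encode(input_ids):
    # Stand-in encoder: summarizes the input sentence as a single number.
    # A real model would run an RNN over the ids and return its final state.
    return float(np.mean(input_ids))

def decode_step(state, word_id, step):
    # Stand-in decoder step: toy deterministic logic instead of an RNN cell.
    # A real model would compute logits from (state, word_id) and pick argmax.
    next_id = word_id + 2   # pretend the network predicts the next word
    if step >= 3:           # pretend it eventually predicts end-of-sequence
        next_id = END_ID
    return next_id

def generate_reply(input_ids, max_len=10):
    state = encode(input_ids)   # one encoder pass over the full input sentence
    word, reply = START_ID, []
    for step in range(max_len):
        # The decoder is fed one word at a time: its own previous prediction.
        word = decode_step(state, word, step)
        if word == END_ID:
            break
        reply.append(word)
    return reply

print(generate_reply([5, 7, 9]))  # → [2, 4, 6]
```

The important structural point is the loop: unlike training, where the whole target sentence is available, the decoder here receives its own previous prediction as the next input, which is why decode_seqs2 is fed a single word per step.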