Apache Spark Deep Learning Cookbook

By: Ahmed Sherif, Amrith Ravindra

Training and saving the LSTM model

You can now train a statistical language model from the prepared data.

The model that will be trained is a neural language model. It has a few unique characteristics:

  • It uses a distributed representation for words, so words with similar meanings have similar representations
  • It learns the representation at the same time as it learns the model
  • It learns to predict the probability of the next word from the context of the previous 50 words

Specifically, you will use an Embedding layer to learn the representation of words, and a Long Short-Term Memory (LSTM) recurrent neural network to learn to predict words based on their context.
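The following is a minimal sketch of that architecture in Keras (which the book runs on top of TensorFlow). The names vocab_size and seq_length are placeholders for values derived from your prepared data, and the layer sizes (a 50-dimensional embedding and 100 LSTM units) are illustrative choices rather than values fixed by the recipe:

from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense

def define_model(vocab_size, seq_length):
    # vocab_size and seq_length are placeholders derived from the prepared data
    model = Sequential()
    # learn a 50-dimensional distributed representation for each word
    model.add(Embedding(vocab_size, 50, input_length=seq_length))
    # the LSTM learns to predict the next word from the 50-word context
    model.add(LSTM(100))
    model.add(Dense(100, activation='relu'))
    # one output probability per word in the vocabulary
    model.add(Dense(vocab_size, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam',
                  metrics=['accuracy'])
    return model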

Getting ready

As previously discussed, the learned embedding needs to know the size of the vocabulary and the length of the input sequences. It also has a parameter that specifies how many dimensions are used to represent each word; that is, the size of the embedding vector space.

Common values are 50, 100, and 300. We will...
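As a sketch of how these values are typically derived, and of the training and saving steps the section title refers to, the snippet below assumes the prepared 51-token sequences were saved one per line to a text file. The filename sequences.txt, the variable names, and the training settings (batch size 128, 100 epochs) are assumptions for illustration, not values taken from the recipe:

import pickle
import numpy as np
from keras.preprocessing.text import Tokenizer
from keras.utils import to_categorical

# placeholder filename: the prepared 51-token sequences, one per line
lines = open('sequences.txt').read().split('\n')

# map each word to an integer index
tokenizer = Tokenizer()
tokenizer.fit_on_texts(lines)
sequences = np.array(tokenizer.texts_to_sequences(lines))

# +1 because Keras reserves index 0, so word indices start at 1
vocab_size = len(tokenizer.word_index) + 1

# input: the first 50 words; output: the 51st word, one-hot encoded
X, y = sequences[:, :-1], sequences[:, -1]
y = to_categorical(y, num_classes=vocab_size)
seq_length = X.shape[1]

# define_model() is the sketch shown earlier
model = define_model(vocab_size, seq_length)
model.fit(X, y, batch_size=128, epochs=100)

# save the trained model and tokenizer for later text generation
model.save('model.h5')
pickle.dump(tokenizer, open('tokenizer.pkl', 'wb'))

Saving both the model and the tokenizer matters: the same word-to-integer mapping is needed when the model is later used to generate text.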