Natural Language Processing with TensorFlow

By: Motaz Saad, Thushan Ganegedara

Overview of this book

Natural language processing (NLP) supplies the majority of data available to deep learning applications, while TensorFlow is the most important deep learning framework currently available. Natural Language Processing with TensorFlow brings TensorFlow and NLP together to give you invaluable tools to work with the immense volume of unstructured data in today’s data streams, and to apply these tools to specific NLP tasks. Thushan Ganegedara starts by giving you a grounding in NLP and TensorFlow basics. You'll then learn how to use Word2vec, including advanced extensions, to create word embeddings that turn sequences of words into vectors accessible to deep learning algorithms. Chapters on classical deep learning algorithms, like convolutional neural networks (CNNs) and recurrent neural networks (RNNs), demonstrate important NLP tasks such as sentence classification and language generation. You will learn how to apply high-performance RNN models, like long short-term memory (LSTM) cells, to NLP tasks. You will also explore neural machine translation and implement a neural machine translator. After reading this book, you will have a solid understanding of NLP, along with the skills to apply TensorFlow to deep learning NLP applications and to perform specific NLP tasks.
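
As a rough illustration of what a word embedding gives you (a minimal NumPy sketch with a made-up toy vocabulary and randomly initialized vectors, not the book's TensorFlow code), the lookup below maps a word sequence to a matrix of dense vectors and compares two words by cosine similarity:

import numpy as np

# Hypothetical toy vocabulary and embedding size (illustrative only).
vocab = {"the": 0, "cat": 1, "sat": 2, "mat": 3}
embedding_dim = 8

# In practice these vectors are learned (for example, by Word2vec's
# skip-gram objective); random values are used here just to show the
# mechanics of the lookup.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), embedding_dim))

def embed(sentence):
    """Map a sequence of words to a sequence of dense vectors."""
    ids = [vocab[w] for w in sentence.split()]
    return embeddings[ids]          # shape: (num_words, embedding_dim)

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

vectors = embed("the cat sat")
print(vectors.shape)                                # (3, 8)
print(cosine(embeddings[vocab["cat"]],
             embeddings[vocab["mat"]]))             # word-word similarity

After training, nearby vectors correspond to words with related meanings, which is exactly what makes them useful inputs to the downstream CNN, RNN, and LSTM models the book covers.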

Summary

This chapter surveyed current trends in NLP and the future directions in which the field is heading. Although it is a very broad topic, we discussed some of the most recent advancements in NLP. Among current trends, we first looked at the progress being made in word embeddings. We saw that more accurate embeddings with richer interpretations (for example, probabilistic ones) are emerging. We then looked at improvements in machine translation, as it is one of the most sought-after areas of NLP. We saw that better attention mechanisms and better MT models capable of producing increasingly realistic translations are both emerging.
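
To make the attention idea concrete, here is a minimal dot-product attention sketch in NumPy; the shapes and random values are illustrative assumptions, and in the book's actual MT models these quantities are learned as TensorFlow variables rather than fixed arrays:

import numpy as np

def softmax(x):
    """Numerically stable softmax over a score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical sizes: 5 source time steps, hidden size 8 (illustrative).
rng = np.random.default_rng(1)
encoder_states = rng.normal(size=(5, 8))   # one vector per source word
decoder_state = rng.normal(size=(8,))      # current decoder hidden state

# Dot-product attention: score each source position against the decoder
# state, normalize the scores, then take a weighted sum of encoder states.
scores = encoder_states @ decoder_state    # shape: (5,)
weights = softmax(scores)                  # attention distribution
context = weights @ encoder_states         # shape: (8,) context vector

print(weights.round(3))                    # which source words are attended to
print(context.shape)

The attention weights let the decoder focus on different source words at each output step, which is the key ingredient behind the translation quality gains discussed above.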

We then looked at some of the novel NLP research taking place (mostly in 2017). First, we investigated the penetration of NLP into other fields: computer vision, reinforcement learning, and generative adversarial models. We looked at how NLP systems are being improved...