Advanced Natural Language Processing with TensorFlow 2

By: Ashish Bansal, Tony Mullen

Overview of this book

Recently, there have been tremendous advances in NLP, and these advances are now moving from research labs into practical applications. This book blends the theoretical and practical aspects of trending and complex NLP techniques, focusing on innovative applications in NLP, language generation, and dialogue systems. It shows you how to preprocess text with techniques such as tokenization, part-of-speech tagging, and lemmatization, using popular libraries such as Stanford NLP and SpaCy. You will build a Named Entity Recognition (NER) model from scratch using Conditional Random Fields and Viterbi decoding on top of RNNs. The book covers key emerging areas such as generating text for sentence completion and text summarization, bridging images and text by generating captions for images, and managing the dialogue aspects of chatbots. You will learn how to apply transfer learning and fine-tuning using TensorFlow 2, along with practical techniques that can simplify the labeling of textual data. Each technique comes with working code that you can adapt to your own use cases. By the end of the book, you will have advanced knowledge of the tools, techniques, and deep learning architectures used to solve complex NLP problems.

Summary

In this chapter, we worked through the basics of NLP, including collecting and labeling training data, tokenization, stop word removal, case normalization, POS tagging, stemming, and lemmatization. We also covered some of the vagaries of these techniques in languages such as Japanese and Russian. Using a variety of features derived from these approaches, we trained a model to classify spam messages containing a combination of English and Bahasa Indonesia words, which got us to a model with 94% accuracy.
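To make these steps concrete, here is a minimal preprocessing sketch using SpaCy (illustrative only, not the book's exact code); it assumes the en_core_web_sm model has been installed with python -m spacy download en_core_web_sm:

import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The cats were chasing mice in the garden!")  # tokenization

for token in doc:
    if token.is_stop or token.is_punct:  # stop word removal
        continue
    print(
        token.text.lower(),  # case normalization
        token.pos_,          # POS tag
        token.lemma_,        # lemma from lemmatization
    )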

However, the major challenge in using the content of the messages was finding a way to represent words as vectors so that computations could be performed on them. We started with a simple count-based vectorization scheme and then graduated to a more sophisticated TF-IDF approach, both of which produce sparse vectors. The TF-IDF approach yielded a model with over 98% accuracy on the spam detection task.
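Both schemes are available in scikit-learn; the toy sketch below (made-up messages, not the book's dataset) shows how each turns raw text into sparse vectors:

from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

messages = [
    "win a free prize now",
    "are we still meeting for lunch",
    "free entry claim your prize",
]

# Count-based vectorization: each message becomes a sparse vector of term counts.
counts = CountVectorizer().fit_transform(messages)

# TF-IDF: counts reweighted to down-weight words common across messages.
tfidf = TfidfVectorizer().fit_transform(messages)

print(counts.shape, tfidf.shape)  # both are sparse (n_messages, vocabulary_size)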

Finally, we saw a contemporary method of generating dense word embeddings, called Word2Vec. Although this method is a few years old, it is still very relevant in many production applications. Once the word embeddings are generated, they can be cached for inference, which lets an ML model that uses them run with relatively low latency.
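As a rough illustration of this workflow, the sketch below trains Word2Vec on a toy corpus with gensim (the sentences and hyperparameters are illustrative assumptions, not the book's):

from gensim.models import Word2Vec

sentences = [
    ["free", "prize", "claim", "now"],
    ["meeting", "for", "lunch", "today"],
    ["claim", "your", "free", "entry"],
]

# Train small 50-dimensional embeddings on the toy corpus.
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, workers=1)

# Trained vectors can be exported and cached for low-latency inference.
vector = model.wv["free"]  # dense 50-dimensional embedding
print(vector.shape, model.wv.most_similar("free", topn=2))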

We used a very basic deep learning model to solve the SMS spam classification task. Just as Convolutional Neural Networks (CNNs) are the predominant architecture in computer vision, Recurrent Neural Networks (RNNs), especially those based on Long Short-Term Memory (LSTM) units and Bi-directional LSTMs (BiLSTMs), are the most common choice for building NLP models. In the next chapter, we cover the structure of LSTMs and build a sentiment analysis model using BiLSTMs. These models will be used extensively, and in creative ways, to solve different NLP problems in later chapters.
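For reference, a basic TensorFlow 2 classifier over such vectorized features might look like the sketch below; the feature size, random stand-in data, and hyperparameters are assumptions for illustration, not the book's exact configuration:

import numpy as np
import tensorflow as tf

num_features = 1000                        # assumed TF-IDF vocabulary size
X = np.random.rand(100, num_features)      # stand-in for TF-IDF message vectors
y = np.random.randint(0, 2, size=(100,))   # stand-in spam/ham labels

# A minimal feed-forward classifier producing a spam probability.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(num_features,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=16, verbose=0)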