The Handbook of NLP with Gensim

By: Chris Kuo

Overview of this book

Navigating the terrain of NLP research and applying it practically can be a formidable task, and The Handbook of NLP with Gensim makes it easier. This book demystifies NLP and equips you with hands-on strategies spanning healthcare, e-commerce, finance, and more, enabling you to leverage Gensim in real-world scenarios.

You'll begin by exploring the motives and techniques for extracting text information, such as bag-of-words, TF-IDF, and word embeddings. The book then guides you through topic modeling: Latent Semantic Analysis (LSA) for dimensionality reduction and discovering latent semantic relationships in text data, Latent Dirichlet Allocation (LDA) for probabilistic topic modeling, and Ensemble LDA for enhancing topic modeling stability and accuracy. Next, you'll learn text summarization techniques and use Word2Vec and Doc2Vec to build modeling pipelines and optimize models with hyperparameters. As you get acquainted with practical applications across industries, the book will inspire you to design innovative projects. Alongside topic modeling, you'll also explore named entity handling and NER tools, modeling procedures, and tools for building effective topic modeling applications.

By the end of this book, you'll have mastered the techniques essential to creating applications with Gensim and integrating NLP into your business processes.
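As a taste of the workflow the book walks through, here is a minimal, self-contained Gensim sketch covering the bag-of-words, TF-IDF, and LDA steps mentioned above. The toy documents, the choice of two topics, and the random seed are illustrative assumptions, not examples from the book:

    # Build a bag-of-words corpus, weight it with TF-IDF, and fit an LDA model.
    from gensim.corpora import Dictionary
    from gensim.models import TfidfModel, LdaModel

    # Toy pre-tokenized documents (hypothetical, for illustration only).
    documents = [
        ["patient", "doctor", "hospital", "treatment"],
        ["stock", "market", "finance", "investment"],
        ["shopping", "cart", "checkout", "payment"],
    ]

    dictionary = Dictionary(documents)                    # token -> id mapping
    bow_corpus = [dictionary.doc2bow(doc) for doc in documents]

    tfidf = TfidfModel(bow_corpus)                        # TF-IDF weighting
    tfidf_corpus = [tfidf[doc] for doc in bow_corpus]

    # LDA is trained on the raw bag-of-words counts; num_topics=2 is an
    # arbitrary choice for this sketch.
    lda = LdaModel(bow_corpus, id2word=dictionary, num_topics=2, random_state=42)
    print(lda.print_topics())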
Table of Contents (24 chapters)

Part 1: NLP Basics
Part 2: Latent Semantic Analysis/Latent Semantic Indexing
Part 3: Word2Vec and Doc2Vec
Part 4: Topic Modeling with Latent Dirichlet Allocation
Part 5: Comparison and Applications

Understanding BERT

BERT was published in 2019 by Devlin et al. and is based on the Transformer architecture [3]. It soon became the prevailing model in NLP. BERT trains the Transformer to learn from the words both before and after each word, so it captures context and word order better. This helps the model handle tricky cases such as jokes or words with multiple meanings, making it excellent at understanding all kinds of text, from casual chat to books.

How does it do that? BERT removes the unidirectionality constraint of earlier language models and uses a masked language model (MLM) objective that randomly masks some of the input tokens. Because those tokens are hidden, the MLM has to predict the original vocabulary ID of each masked word from its surrounding context. To do so, it conditions jointly on the context to the left and to the right of the mask. This is why it is called bidirectional,...
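To make the masking idea concrete, here is a minimal sketch using the Hugging Face transformers library rather than Gensim (an illustrative assumption on our part; the example sentence and the bert-base-uncased model choice are not from the book):

    from transformers import pipeline

    # Load a pretrained BERT model behind a fill-mask pipeline.
    # (bert-base-uncased is an illustrative choice, not specified by the book.)
    fill_mask = pipeline("fill-mask", model="bert-base-uncased")

    # BERT must predict the [MASK] token from both the words before it
    # ("The bank raised interest") and the words after it ("to curb inflation").
    for prediction in fill_mask("The bank raised interest [MASK] to curb inflation."):
        print(prediction["token_str"], round(prediction["score"], 3))

A strictly left-to-right model would have to guess the masked word before ever seeing "to curb inflation"; the MLM objective lets BERT use both sides of the sentence at once, which is exactly the bidirectionality described above.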