The Handbook of NLP with Gensim

By: Chris Kuo

Overview of this book

Navigating the terrain of NLP research and applying it practically can be a formidable task. The Handbook of NLP with Gensim demystifies NLP and equips you with hands-on strategies spanning healthcare, e-commerce, finance, and more, so that you can leverage Gensim in real-world scenarios. You'll begin by exploring the motives and techniques for extracting text information, such as bag-of-words (BoW), TF-IDF, and word embeddings. The book then guides you through topic modeling: Latent Semantic Analysis (LSA) for dimensionality reduction and discovering latent semantic relationships in text data, Latent Dirichlet Allocation (LDA) for probabilistic topic modeling, and Ensemble LDA for more stable and accurate topics. Next, you'll learn text summarization techniques and use Word2Vec and Doc2Vec to build modeling pipelines and tune models with hyperparameters. As you work through practical applications in various industries, the book will inspire you to design innovative projects of your own. Alongside topic modeling, you'll also explore named entity handling and NER tools, modeling procedures, and tooling for effective topic modeling applications. By the end of this book, you'll have mastered the techniques essential to create applications with Gensim and integrate NLP into your business processes.
Table of Contents (24 chapters)

Part 1: NLP Basics
Part 2: Latent Semantic Analysis/Latent Semantic Indexing
Part 3: Word2Vec and Doc2Vec
Part 4: Topic Modeling with Latent Dirichlet Allocation
Part 5: Comparison and Applications

Summary

In this chapter, we learned about the basic forms of text representation for raw text: BoW, Bag-of-N-grams, and TF-IDF. The advantage of BoW is its simplicity. Bag-of-N-grams improves on BoW by capturing phrases. TF-IDF improves on BoW by measuring the importance of a word in a document relative to the entire corpus: words that are frequent in a document but rare across the corpus receive high TF-IDF scores. The common disadvantage of BoW, Bag-of-N-grams, and TF-IDF is that they produce very sparse matrices, and they do not take into consideration the order of words in a document. We also learned how to perform BoW and TF-IDF in Gensim, scikit-learn, and NLTK.

As we become more hands-on with texts, we'll need to deal with words in uppercase and lowercase, and with documents containing punctuation, numbers, and special characters. We'll also need to distinguish meaningful words from common words and annotate them with grammatical notations...