Python Natural Language Processing Cookbook

By: Zhenya Antić
Overview of this book

Python is the most widely used language for natural language processing (NLP) thanks to its extensive tools and libraries for analyzing text and extracting computer-usable data. This book will take you through a range of techniques for text processing, from basics such as parsing the parts of speech to complex topics such as topic modeling, text classification, and visualization. Starting with an overview of NLP, the book presents recipes for dividing text into sentences, stemming and lemmatization, removing stopwords, and parts of speech tagging to help you to prepare your data. You’ll then learn ways of extracting and representing grammatical information, such as dependency parsing and anaphora resolution, discover different ways of representing the semantics using bag-of-words, TF-IDF, word embeddings, and BERT, and develop skills for text classification using keywords, SVMs, LSTMs, and other techniques. As you advance, you’ll also see how to extract information from text, implement unsupervised and supervised techniques for topic modeling, and perform topic modeling of short texts, such as tweets. Additionally, the book shows you how to develop chatbots using NLTK and Rasa and visualize text data. By the end of this NLP book, you’ll have developed the skills to use a powerful set of tools for text processing.

Combining similar words – lemmatization

A technique similar to stemming is lemmatization. The difference is that lemmatization provides us with a real dictionary word, that is, the word's canonical form. For example, the lemma of the word cats is cat, and the lemma of the word ran is run.

Getting ready

We will be using the NLTK package for this recipe.

How to do it…

The NLTK package includes a lemmatizer module based on the WordNet database.

Here is how to use it:

  1. Import the NLTK WordNet lemmatizer (the WordNet data must be downloaded once beforehand, for example, by running nltk.download('wordnet')):
    from nltk.stem import WordNetLemmatizer
  2. Initialize the lemmatizer:
    lemmatizer = WordNetLemmatizer()
  3. Initialize a list with words to lemmatize:
    words = ['duck', 'geese', 'cats', 'books']
  4. Lemmatize the words:
    lemmatized_words = [lemmatizer.lemmatize(word) for word in words]
  5. The result will be as follows:
    ['duck', 'goose', 'cat', 'book']

How it works…

In step 1, we import...