Finding references – anaphora resolution

When we work on problems of extracting entities and relations from text (see the Extracting entities and relations recipe), we are faced with real text, and many of our entities might end up being extracted as pronouns, such as she or him. In order to tackle this issue, we need to perform anaphora resolution, or the process of substituting the pronouns with their referents.

Getting ready

For this task, we will be using a spaCy extension written by Hugging Face called neuralcoref (see https://github.com/huggingface/neuralcoref). As the name suggests, it uses neural networks to resolve pronouns. To install the package, use the following command:

pip install neuralcoref
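
At the time of writing, neuralcoref is built against spaCy 2.x; if the prebuilt package does not match your spaCy version, the project's GitHub README suggests installing it from source instead. The commands below are reproduced as a convenience; check the repository for the current instructions:

    git clone https://github.com/huggingface/neuralcoref.git
    cd neuralcoref
    pip install -r requirements.txt
    pip install -e .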

How to do it…

The steps for this recipe are as follows:

  1. Import spacy and neuralcoref:
    import spacy
    import neuralcoref
  2. Load the spaCy engine and add neuralcoref to its pipeline:
    nlp = spacy.load('en_core_web_sm')
    neuralcoref.add_to_pipe(nlp)
  3. We will process the following short text:
    text = "Earlier this year, Olga appeared on a new song. She was featured on one of the tracks. The singer is assuring that her next album will be worth the wait."
  4. Now that neuralcoref is part of the pipeline, we just process the text using spaCy and then output the result:
    doc = nlp(text)
    print(doc._.coref_resolved)

    The output will be as follows:

    Earlier this year, Olga appeared on a new song. Olga was featured on one of the tracks. Olga is assuring that Olga next album will be worth the wait.

How it works…

In step 1, we import the necessary packages. In step 2, we load the spaCy engine and then add neuralcoref to its pipeline. In step 3, we initialize the text variable with the short text we will be using.

In step 4, we use the spaCy engine to process the text and then print out the text with the pronouns resolved. You can see that the pronouns she and her, and even the phrase The singer, were all correctly substituted with the name Olga.

The neuralcoref package uses custom spaCy attributes, which are accessed through an underscore followed by the attribute name. coref_resolved is one such custom attribute, set on the Doc object. To learn more about spaCy custom attributes, see https://spacy.io/usage/processing-pipelines#custom-components-attributes.
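
Besides coref_resolved, neuralcoref sets several other custom attributes that are useful for inspecting what the model actually found. The following short sketch, assuming the nlp pipeline and text variable from the steps above, prints a few of them; the exact cluster output will depend on the model version:

    doc = nlp(text)
    # True if at least one coreference cluster was found in the document
    print(doc._.has_coref)
    # The clusters, each grouping a main mention with its references,
    # for example: [Olga: [Olga, She, The singer, her]]
    print(doc._.coref_clusters)
    # The text with each reference replaced by its cluster's main mention
    print(doc._.coref_resolved)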

There's more…

The neuralcoref package did a good job of recognizing the different references to Olga in the preceding example. However, with an unusual name, it might not work correctly. Here, we use an example from the Hugging Face GitHub repository:

  1. Let's use the following short text:
    text = "Deepika has a dog. She loves him. The movie star has always been fond of animals."
  2. Upon processing this text using the preceding code, we get the following output:
    Deepika has a dog. Deepika loves Deepika. Deepika has always been fond of animals.
  3. Because Deepika is an unusual name, the model has trouble figuring out whether this person is a man or a woman, and it incorrectly resolves the pronoun him to Deepika. To solve this problem, we can help it by specifying who Deepika actually is. We add neuralcoref to the spaCy pipeline with a conversion dictionary, as follows (see also the note after these steps on re-adding the component):
    neuralcoref.add_to_pipe(nlp, conv_dict={'Deepika': ['woman']})
  4. Now, let's process the text again, as we did previously:
    doc = nlp(text)
    print(doc._.coref_resolved)

    The output will be as follows:

    Deepika has a dog. Deepika loves a dog. Deepika has always been fond of animals.

    Once we give the coreference resolution module more information, it gives the correct output.
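
Note that if neuralcoref has already been added to the pipeline, as in step 2 of the main recipe, spaCy will not allow a second component with the same name. A minimal sketch of re-adding it with the conversion dictionary, assuming the component is registered under the name neuralcoref, looks like this:

    # Remove the existing coreference component before re-adding it
    if 'neuralcoref' in nlp.pipe_names:
        nlp.remove_pipe('neuralcoref')
    # Re-add it, telling the model that Deepika refers to a woman
    neuralcoref.add_to_pipe(nlp, conv_dict={'Deepika': ['woman']})
    doc = nlp(text)
    print(doc._.coref_resolved)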