Python Natural Language Processing Cookbook

By: Zhenya Antić

Overview of this book

Python is the most widely used language for natural language processing (NLP) thanks to its extensive tools and libraries for analyzing text and extracting computer-usable data. This book will take you through a range of techniques for text processing, from basics such as parsing the parts of speech to complex topics such as topic modeling, text classification, and visualization. Starting with an overview of NLP, the book presents recipes for dividing text into sentences, stemming and lemmatization, removing stopwords, and parts of speech tagging to help you to prepare your data. You’ll then learn ways of extracting and representing grammatical information, such as dependency parsing and anaphora resolution, discover different ways of representing the semantics using bag-of-words, TF-IDF, word embeddings, and BERT, and develop skills for text classification using keywords, SVMs, LSTMs, and other techniques. As you advance, you’ll also see how to extract information from text, implement unsupervised and supervised techniques for topic modeling, and perform topic modeling of short texts, such as tweets. Additionally, the book shows you how to develop chatbots using NLTK and Rasa and visualize text data. By the end of this NLP book, you’ll have developed the skills to use a powerful set of tools for text processing.

Using BERT for sentiment analysis

In this recipe, we will fine-tune a pretrained Bidirectional Encoder Representations from Transformers (BERT) model to classify the Twitter data from the previous recipe. We will load the pretrained model, encode the data, fine-tune the model on it, and then use it on unseen examples.

Getting ready

We will use the Hugging Face transformers library for this recipe. To install the package, run the following command:

pip install transformers
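Note that transformers also requires a deep learning backend to run models. The sketches in this recipe assume PyTorch (this is our assumption; the book may use a different backend), which you can install with:

pip install torch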

We will use the same Twitter dataset as in the previous recipe.
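As a minimal sketch of this setup, you might load the data with pandas as follows (the file name and column names here are hypothetical; use whatever the previous recipe produced):

    import pandas as pd

    # Hypothetical file and column names; match these to the
    # previous recipe's output
    df = pd.read_csv("twitter_data.csv")
    texts = df["text"].tolist()
    labels = df["label"].tolist()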

How to do it…

BERT models are a little more complicated than the models we used in previous recipes, but the general idea is the same: encode the data and train the model. The one difference is that in this recipe the model is already pretrained, so we will just be fine-tuning it on our data. The steps for this recipe are as follows (see the sketch after the steps for the overall flow):

  1. Import the necessary functions and packages:
    import pandas as pd
    import...
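The code in this step is truncated in this excerpt. As a reference point, here is a minimal, self-contained sketch of the flow the recipe describes: load a pretrained BERT model, encode the tweets, fine-tune, and predict on unseen examples. The checkpoint name, hyperparameters, and variable names are our assumptions, not the book's exact code:

    import torch
    from torch.optim import AdamW
    from torch.utils.data import DataLoader, TensorDataset
    from transformers import BertForSequenceClassification, BertTokenizer

    # Toy stand-ins for the Twitter texts and sentiment labels from the
    # previous recipe (these examples and the 0/1 label scheme are assumptions)
    texts = ["I love this!", "This is terrible..."]
    labels = [1, 0]

    # Load a pretrained tokenizer and classification model; the checkpoint
    # name is an assumption, not necessarily the one the book uses
    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2)

    # Encode the data: pad/truncate every tweet to the same length and
    # return PyTorch tensors
    encodings = tokenizer(texts, padding=True, truncation=True,
                          max_length=128, return_tensors="pt")
    dataset = TensorDataset(encodings["input_ids"],
                            encodings["attention_mask"],
                            torch.tensor(labels))
    loader = DataLoader(dataset, batch_size=16, shuffle=True)

    # Fine-tune: update the pretrained weights for a few epochs
    optimizer = AdamW(model.parameters(), lr=2e-5)
    model.train()
    for epoch in range(3):
        for input_ids, attention_mask, batch_labels in loader:
            optimizer.zero_grad()
            outputs = model(input_ids=input_ids,
                            attention_mask=attention_mask,
                            labels=batch_labels)
            outputs.loss.backward()
            optimizer.step()

    # Use the fine-tuned model on an unseen example
    model.eval()
    new_encodings = tokenizer(["What a great day"], return_tensors="pt")
    with torch.no_grad():
        logits = model(**new_encodings).logits
    print(logits.argmax(dim=-1).item())  # 1 = positive under our assumed labels

A small learning rate (here 2e-5) and only a few epochs are typical when fine-tuning BERT, since the pretrained weights already encode most of the language knowledge and only need a gentle nudge toward the classification task.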