
Python Natural Language Processing Cookbook

By: Zhenya Antić

Overview of this book

Python is the most widely used language for natural language processing (NLP) thanks to its extensive tools and libraries for analyzing text and extracting computer-usable data. This book will take you through a range of techniques for text processing, from basics such as parsing the parts of speech to complex topics such as topic modeling, text classification, and visualization. Starting with an overview of NLP, the book presents recipes for dividing text into sentences, stemming and lemmatization, removing stopwords, and parts of speech tagging to help you to prepare your data. You’ll then learn ways of extracting and representing grammatical information, such as dependency parsing and anaphora resolution, discover different ways of representing the semantics using bag-of-words, TF-IDF, word embeddings, and BERT, and develop skills for text classification using keywords, SVMs, LSTMs, and other techniques. As you advance, you’ll also see how to extract information from text, implement unsupervised and supervised techniques for topic modeling, and perform topic modeling of short texts, such as tweets. Additionally, the book shows you how to develop chatbots using NLTK and Rasa and visualize text data. By the end of this NLP book, you’ll have developed the skills to use a powerful set of tools for text processing.

Using LSTMs for supervised text classification

In this recipe, we will build a deep learning LSTM classifier for the BBC News dataset. There is not enough data to build a great classifier, but we will use the same dataset for comparison. By the end of this recipe, you will have a complete LSTM classifier that is trained and can be tested on new inputs.

Getting ready

To follow this recipe, we need to install the tensorflow and keras packages:

pip install tensorflow
pip install keras

In this recipe, we will use the same BBC dataset to create an LSTM classification model.

How to do it…

The general structure of the training is similar to that of classical machine learning model training: we clean the data, create the dataset, and split it into training and test sets. We then train a model and test it on unseen data. The particulars of the training, however, differ for deep learning as opposed to statistical machine learning methods such as SVMs. The steps for this recipe are...
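The pipeline described above can be sketched in Keras as follows. This is a minimal illustration, not the recipe's exact code: the two placeholder articles stand in for the BBC News texts, and the vocabulary size, sequence length, and layer sizes are illustrative choices, not values from the book.

```python
# Sketch of an LSTM text classifier in Keras.
# `texts` and `labels` are placeholders for the BBC News data;
# MAX_WORDS, MAX_LEN, and the layer sizes are illustrative choices.
import numpy as np
from tensorflow.keras.layers import (
    Dense, Embedding, Input, LSTM, TextVectorization,
)
from tensorflow.keras.models import Sequential

MAX_WORDS = 10000    # vocabulary size
MAX_LEN = 200        # pad/truncate every article to 200 tokens
NUM_CLASSES = 5      # the BBC News dataset has five categories

texts = [
    "the government announced a new budget today",
    "the team won the championship final",
]                                   # placeholder articles
labels = np.array([0, 1])           # placeholder category indices

# Turn raw text into fixed-length sequences of word indices
vectorize = TextVectorization(
    max_tokens=MAX_WORDS, output_sequence_length=MAX_LEN
)
vectorize.adapt(texts)
X = vectorize(np.array(texts))      # shape: (num_texts, MAX_LEN)

# Embedding -> LSTM -> softmax over the five news categories
model = Sequential([
    Input(shape=(MAX_LEN,), dtype="int32"),
    Embedding(MAX_WORDS, 128),
    LSTM(64),
    Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(
    loss="sparse_categorical_crossentropy",
    optimizer="adam",
    metrics=["accuracy"],
)

# With the full dataset, training would look like:
# model.fit(X, labels, epochs=5, validation_split=0.2)
```

With real data, the split into training and test sets would happen before `fit`, and the held-out portion would be passed to `model.evaluate` for the comparison with the SVM results.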