Python Natural Language Processing Cookbook

By: Zhenya Antić

Overview of this book

Python is the most widely used language for natural language processing (NLP) thanks to its extensive tools and libraries for analyzing text and extracting computer-usable data. This book will take you through a range of techniques for text processing, from basics such as parsing the parts of speech to complex topics such as topic modeling, text classification, and visualization. Starting with an overview of NLP, the book presents recipes for dividing text into sentences, stemming and lemmatization, removing stopwords, and parts of speech tagging to help you to prepare your data. You’ll then learn ways of extracting and representing grammatical information, such as dependency parsing and anaphora resolution, discover different ways of representing the semantics using bag-of-words, TF-IDF, word embeddings, and BERT, and develop skills for text classification using keywords, SVMs, LSTMs, and other techniques. As you advance, you’ll also see how to extract information from text, implement unsupervised and supervised techniques for topic modeling, and perform topic modeling of short texts, such as tweets. Additionally, the book shows you how to develop chatbots using NLTK and Rasa and visualize text data. By the end of this NLP book, you’ll have developed the skills to use a powerful set of tools for text processing.
Table of Contents (10 chapters)

Technical requirements

Throughout this book, I will be showing examples that were run using an Anaconda installation of Python 3.6.10. To install Anaconda, follow the instructions here: https://docs.anaconda.com/anaconda/install/.

After you have installed Anaconda, use it to create a virtual environment:

conda create -n nlp_book python=3.6.10 anaconda
conda activate nlp_book

Then, install spaCy 2.3.0 and NLTK 3.4.5, pinning the versions so the examples behave as shown:

pip install nltk==3.4.5
pip install spacy==2.3.0

After you have installed spaCy and NLTK, install the models needed to use them. For spaCy, use this:

python -m spacy download en_core_web_sm

Use the Python interpreter to download the necessary data for NLTK:

python
>>> import nltk
>>> nltk.download('punkt')

All the code that is in this book can be found in the book's GitHub repository: https://github.com/PacktPublishing/Python-Natural-Language-Processing-Cookbook.

Important note

The files in the book's GitHub repository should...