Natural Language Processing Fundamentals

By: Sohom Ghosh, Dwight Gunning

Overview of this book

If NLP hasn't been your forte, Natural Language Processing Fundamentals will ensure you get off to a steady start. This comprehensive guide shows you how to use Python libraries and NLP concepts effectively to solve a variety of problems. You'll be introduced to natural language processing and its applications through examples and exercises, followed by the initial stages of solving a problem: defining it, getting text data, and preparing that data for modeling. With exposure to concepts such as advanced natural language processing algorithms and visualization techniques, you'll learn how to create applications that extract information from unstructured data and present it as impactful visuals. Although you will continue to learn NLP-based techniques, the focus will gradually shift to developing useful applications; in these sections, you'll understand how to apply NLP techniques to answer questions, as is done in chatbots. By the end of this book, you'll be able to handle a wide range of assignments, from identifying the most suitable type of NLP task for a given problem to using a library such as spaCy or Gensim to perform sentiment analysis. The book will equip you with the knowledge you need to build applications that interpret human language.

3. Developing a Text Classifier

Activity 5: Developing End-to-End Text Classifiers

Solution

Let's build an end-to-end classifier that helps classify Wikipedia comments. Follow these steps to implement this activity:

  1. Open a Jupyter notebook.
  2. Insert a new cell and add the following code to import the necessary packages:
    # Data handling and plotting
    import pandas as pd
    import seaborn as sns
    import matplotlib.pyplot as plt
    %matplotlib inline
    from pylab import *
    # Text cleaning and preprocessing
    import re
    import string
    import nltk
    from nltk import word_tokenize
    from nltk.corpus import stopwords
    from nltk.stem import WordNetLemmatizer
    # Feature extraction, data splitting, and evaluation metrics
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import (accuracy_score, roc_curve, classification_report,
                                 confusion_matrix, precision_recall_curve, auc)
    # Suppress warning messages to keep the notebook output clean
    import warnings
    warnings.filterwarnings('ignore')
  3. In this step, we will read a data file. It has two columns: comment_text and toxic. The comment_text column contains the text of the Wikipedia comments, and the toxic column holds the label indicating whether a comment is toxic. Read the file into a DataFrame, as sketched below.
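The remaining code for this step is not shown above. The following is a minimal sketch of how the rest of the pipeline could look, using only the packages imported in step 2 plus LogisticRegression as an illustrative model choice; the file name train_comment_small.csv, the max_features value, and the 80/20 split are placeholder assumptions rather than the book's exact settings.

    # NLTK resources for tokenization, stop words, and lemmatization
    # (newer NLTK versions may also require 'punkt_tab').
    nltk.download('punkt', quiet=True)
    nltk.download('stopwords', quiet=True)
    nltk.download('wordnet', quiet=True)

    # Illustrative model choice; the activity itself may use other classifiers.
    from sklearn.linear_model import LogisticRegression

    # Read the comments data (placeholder file name).
    data = pd.read_csv('train_comment_small.csv')

    # Clean each comment: lowercase, drop non-letters, remove stop words, lemmatize.
    lemmatizer = WordNetLemmatizer()
    stop_words = set(stopwords.words('english'))

    def clean_text(text):
        text = re.sub(r'[^a-z\s]', ' ', str(text).lower())
        tokens = [tok for tok in word_tokenize(text) if tok not in stop_words]
        return ' '.join(lemmatizer.lemmatize(tok) for tok in tokens)

    data['cleaned_text'] = data['comment_text'].apply(clean_text)

    # Extract TF-IDF features and split into training and validation sets.
    tfidf = TfidfVectorizer(max_features=500)
    X = tfidf.fit_transform(data['cleaned_text'])
    X_train, X_valid, y_train, y_valid = train_test_split(
        X, data['toxic'], test_size=0.2, random_state=42)

    # Fit the classifier and evaluate it with the imported metrics.
    clf = LogisticRegression()
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_valid)
    y_prob = clf.predict_proba(X_valid)[:, 1]
    print('Accuracy:', accuracy_score(y_valid, y_pred))
    print(classification_report(y_valid, y_pred))
    print(confusion_matrix(y_valid, y_pred))

    # ROC curve and the area under it.
    fpr, tpr, _ = roc_curve(y_valid, y_prob)
    print('ROC AUC:', auc(fpr, tpr))
    plt.plot(fpr, tpr)
    plt.xlabel('False positive rate')
    plt.ylabel('True positive rate')
    plt.show()

    # Precision-recall curve.
    precision, recall, _ = precision_recall_curve(y_valid, y_prob)
    plt.plot(recall, precision)
    plt.xlabel('Recall')
    plt.ylabel('Precision')
    plt.show()

If the toxic label turns out to be heavily imbalanced, which is common for comment-moderation data, a stratified split (stratify=data['toxic']) or class weighting on the classifier is worth considering.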