Natural Language Processing Fundamentals

By: Sohom Ghosh, Dwight Gunning

Overview of this book

If NLP hasn't been your forte, Natural Language Processing Fundamentals will make sure you get off to a steady start. This comprehensive guide shows you how to use Python libraries and NLP concepts effectively to solve a variety of problems. You'll be introduced to natural language processing and its applications through examples and exercises. This is followed by the initial stages of solving a problem: defining it, getting text data, and preparing the data for modeling. With exposure to concepts such as advanced NLP algorithms and visualization techniques, you'll learn how to create applications that extract information from unstructured data and present it as impactful visuals. Although you will continue to learn NLP-based techniques, the focus gradually shifts to building useful applications; in these sections, you'll see how to apply NLP techniques to answer questions, as is done in chatbots. By the end of this book, you'll be able to handle a wide range of tasks, from identifying the most suitable type of NLP task for a problem to using a tool such as spaCy or Gensim to perform sentiment analysis. The book will equip you with the knowledge you need to build applications that interpret human language.
Table of Contents (10 chapters)

2. Basic Feature Extraction Methods

Activity 2: Extracting General Features from Text


Let's extract general features from the given text. Follow these steps to implement this activity:

  1. Open a Jupyter notebook.
  2. Insert a new cell and add the following code to import the necessary libraries:
    import pandas as pd
    from string import punctuation
    import nltk
    nltk.download('tagsets')
    nltk.download('averaged_perceptron_tagger')
    from nltk.data import load
    from nltk import pos_tag
    from nltk import word_tokenize
    from collections import Counter
  3. Now let's see what different kinds of PoS tags nltk provides. Add the following code to do this:
    tagdict = load('help/tagsets/upenn_tagset.pickle')
    list(tagdict.keys())

    The code generates the following output:

    Figure 2.54: List of PoS
  4. The number of occurrences of each PoS is calculated by iterating through each document, annotating each word with its corresponding PoS tag, and counting the tags. Add the following...