Natural Language Processing Fundamentals

By: Sohom Ghosh, Dwight Gunning

Overview of this book

If NLP hasn't been your forte, Natural Language Processing Fundamentals will make sure you get off to a steady start. This comprehensive guide will show you how to effectively use Python libraries and NLP concepts to solve various problems. You'll be introduced to natural language processing and its applications through examples and exercises. This is followed by an introduction to the initial stages of solving a problem, which include problem definition, getting text data, and preparing that data for modeling. With exposure to concepts such as advanced natural language processing algorithms and visualization techniques, you'll learn how to create applications that extract information from unstructured data and present it as impactful visuals. Although you will continue to learn NLP-based techniques, the focus will gradually shift to developing useful applications. In those sections, you'll see how NLP techniques can be applied to answer questions, as in chatbots. By the end of this book, you'll be able to tackle a wide range of tasks, from identifying the most suitable type of NLP task for a problem to using a tool such as spaCy or Gensim to perform sentiment analysis. The book will equip you with the knowledge you need to build applications that interpret human language.
Table of Contents (10 chapters)

1. Introduction to Natural Language Processing

Activity 1: Preprocessing of Raw Text

Solution

Let's perform preprocessing on a text corpus. To implement this activity, follow these steps:

  1. Open a Jupyter notebook.
  2. Insert a new cell and add the following code to import the necessary libraries:
    import nltk
    # download the NLTK resources this activity relies on
    nltk.download('punkt')
    nltk.download('averaged_perceptron_tagger')
    nltk.download('stopwords')
    nltk.download('wordnet')
    from nltk import word_tokenize
    from nltk.stem.wordnet import WordNetLemmatizer
    from nltk.corpus import stopwords
    # note: newer autocorrect releases expose Speller; spell is the older helper used here
    from autocorrect import spell
    from nltk.wsd import lesk
    from nltk.tokenize import sent_tokenize
    import string
  3. Read the content of file.txt and store it in a variable named "sentence". Insert a new cell and add the following code to implement this:
    with open("data_ch1/file.txt", 'r') as f:
        sentence = f.read()
  4. Apply tokenization on the given text corpus. Insert a new cell and add code to tokenize the text; a sketch of this cell follows, along with a preview of the remaining preprocessing steps.
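A minimal tokenization cell, assuming the sentence variable from step 3 (printing the first 20 tokens is just a quick sanity check):

    words = word_tokenize(sentence)
    print(words[0:20])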
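The remaining imports from step 2 hint at the later steps of this activity: spelling correction, stopword and punctuation removal, lemmatization, word-sense disambiguation, and sentence tokenization. A condensed, illustrative sketch of how these tools are typically combined follows; 'bank' is a hypothetical ambiguous word passed to lesk, and the book's own steps may differ in detail:

    # correct spellings token by token (sketch; assumes the legacy spell helper)
    corrected = [spell(word) for word in word_tokenize(sentence)]
    # drop stop words and punctuation
    stop_words = stopwords.words('english')
    cleaned = [word for word in corrected
               if word.lower() not in stop_words and word not in string.punctuation]
    # reduce words to their base (lemma) forms
    lemmatizer = WordNetLemmatizer()
    lemmas = [lemmatizer.lemmatize(word) for word in cleaned]
    # disambiguate a word's sense from its context ('bank' is a hypothetical example)
    sense = lesk(word_tokenize(sentence), 'bank')
    # split the corpus into sentences
    sentences = sent_tokenize(sentence)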