Hands-On Natural Language Processing with Python

By: Rajesh Arumugam, Rajalingappaa Shanmugamani, Auguste Byiringiro, Chaitanya Joshi, Karthik Muthuswamy
Overview of this book

Natural language processing (NLP) has found applications in various domains, such as web search, advertisements, and customer service, and with the help of deep learning, we can enhance its performance in these areas. Hands-On Natural Language Processing with Python teaches you how to leverage deep learning models for various NLP tasks, along with best practices for dealing with today's NLP challenges. To begin with, you will understand the core concepts of NLP and deep learning, such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), semantic embedding, Word2vec, and more. You will learn how to perform a wide range of NLP tasks with neural networks, training and deploying them in your NLP applications. You will get accustomed to using RNNs and CNNs in application areas such as text classification and sequence labeling, which are essential for sentiment analysis, customer service chatbots, and anomaly detection. You will gain the practical knowledge needed to implement deep learning in your linguistic applications using Python's popular deep learning library, TensorFlow. By the end of this book, you will be well versed in building deep learning-backed NLP applications, and in overcoming NLP challenges with best practices developed by domain experts.

Doc2vec

A simple extension of the Word2vec model to the document level was proposed by Mikolov et al. In this method, a unique document ID is added to each document's list of words. The document ID is trained along with the words of the document, and its embedding is averaged (or concatenated) with the word embeddings to produce a document embedding. Hence, in the example that we discussed earlier, the doc2vec model data would look as follows:

  • TensorFlow is an open source software library
  • Python is an open source interpreted software programming language

In contrast to the earlier approach, the document lists now look as follows:

  • [DOC_01, TensorFlow, is, an, open, source, software, library]
  • [DOC_02, Python, is, an, open, source, interpreted, software, programming, language]

This doc2vec model looks very similar to the approach that we discussed with CBOW. Hence,...
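
As a minimal illustrative sketch (the book's own implementations use TensorFlow, so this gensim-based example is an assumption, not the book's code), the tagged document lists above can be trained with the gensim library's Doc2Vec class, where dm=1 selects the distributed-memory variant that mirrors the CBOW-like setup described here:

    from gensim.models.doc2vec import Doc2Vec, TaggedDocument

    # The two example documents, each tagged with its unique document ID
    documents = [
        TaggedDocument(
            words=["tensorflow", "is", "an", "open", "source", "software", "library"],
            tags=["DOC_01"]),
        TaggedDocument(
            words=["python", "is", "an", "open", "source", "interpreted",
                   "software", "programming", "language"],
            tags=["DOC_02"]),
    ]

    # dm=1 trains the distributed-memory (CBOW-like) variant: the document
    # vector is combined with the context word vectors to predict the target word.
    model = Doc2Vec(documents, vector_size=50, window=2, min_count=1, epochs=100, dm=1)

    # Learned document embedding for DOC_01 (gensim 4.x exposes it via model.dv)
    doc_vector = model.dv["DOC_01"]

    # Infer an embedding for a new, unseen document
    new_vector = model.infer_vector(["tensorflow", "is", "a", "software", "library"])

During training, the document ID behaves like an extra context word that is present for every window in its document, so the vector learned for it summarizes the document as a whole.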