Advantages of word2vec

As we have seen, word2vec is a very effective technique for generating distributional similarity. It has several other advantages as well, which I've listed here; a short code sketch after the list illustrates a few of them:

  • Word2vec concepts are easy to understand. They are not so complex that you lose track of what is happening behind the scenes.
  • Using word2vec is simple, and its architecture is powerful. It is also fast to train compared to other techniques.
  • Human effort for training is minimal, because human-tagged data is not needed.
  • This technique works for both small and large datasets, so it is an easy-to-scale model.
  • Once you understand the concepts and the algorithm, you can replicate them on your own dataset as well.
  • It does exceptionally well at capturing semantic similarity.
  • As this is an unsupervised approach...
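
To make a few of these points concrete, here is a minimal sketch of training a word2vec model with the gensim library (my assumption; the gensim 4.x API is used, and the toy corpus, hyperparameter values, and variable names are purely illustrative). It shows that no labeled data is required and that semantic similarity can be queried directly from the trained vectors:

    # Assumed library: gensim (4.x API). The corpus, parameter values, and
    # variable names below are purely illustrative.
    from gensim.models import Word2Vec

    # Toy corpus: each sentence is a list of pre-tokenized, lowercased words.
    # No labels or human tagging are involved.
    sentences = [
        ["the", "king", "rules", "the", "kingdom"],
        ["the", "queen", "rules", "the", "kingdom"],
        ["the", "dog", "chases", "the", "cat"],
        ["the", "cat", "chases", "the", "mouse"],
    ]

    # Train a small model; these hyperparameter values are only examples.
    model = Word2Vec(
        sentences,
        vector_size=50,  # dimensionality of the word vectors
        window=3,        # context window size
        min_count=1,     # keep every word, since the corpus is tiny
        workers=2,       # number of training threads
    )

    # Query the semantic similarity learned from co-occurrence statistics alone.
    print(model.wv.similarity("king", "queen"))
    print(model.wv.most_similar("cat", topn=3))

Even on such a tiny corpus the workflow is the same as on a large one: feed in tokenized sentences, train, and query the resulting vectors. Scaling up is mainly a matter of more data and more worker threads.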