NLTK Essentials

By: Nitin Hardeniya

Overview of this book

Natural Language Processing (NLP) is the field of artificial intelligence and computational linguistics that deals with the interactions between computers and human languages. With the instances of human-computer interaction increasing, it’s becoming imperative for computers to comprehend all major natural languages. Natural Language Toolkit (NLTK) is one such powerful and robust tool.

You start with an introduction to get the gist of how to build systems around NLP. We then move on to explore data science-related tasks, following which you will learn how to create a customized tokenizer and parser from scratch. Throughout, we delve into the essential concepts of NLP while gaining practical insights into various open source tools and libraries available in Python for NLP. You will then learn how to analyze social media sites to discover trending topics and perform sentiment analysis. Finally, you will see tools that will help you deal with large-scale text.

By the end of this book, you will be confident about NLP and data science concepts and know how to apply them in your day-to-day work.

Your turn


Here are the answers to the open-ended questions:

  • Try connecting to any of your databases using pyodbc (a minimal connection sketch follows this list).

    https://code.google.com/p/pyodbc/wiki/GettingStarted

  • Can you build a regex tokenizer that will only select words that are either lowercase, uppercase, numbers, or currency symbols?

    \w+ selects all the words and numbers ([a-zA-Z0-9_]), and \$ matches the currency symbol (see the tokenizer sketch after this list).

  • What's the difference between Stemming and lemmatization?

    Stemming is more of a rule-based approach for getting to the root of a word's grammatical forms, while lemmatization also considers the context and the POS of the given word and then applies rules specific to its grammatical variants. Stemmers are easier to implement, and their processing time is faster than that of lemmatizers (a short comparison sketch follows this list).

  • Can you come up with a Porter stemmer (rule-based) for your native language? (A Snowball-based sketch follows this list.)

    Hint: http://tartarus.org/martin/PorterStemmer/python.txt

    http://Snowball.tartarus.org/algorithms/english/stemmer.html

  • Can we perform other NLP operations after stop word removal?

    No, never. All the typical...
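For the pyodbc exercise, here is a minimal connection sketch. The driver name, server, database, credentials, and table name are placeholders you would replace with your own setup.

    import pyodbc

    # Placeholder connection details -- substitute your own driver,
    # server, database, and credentials.
    conn_str = (
        "DRIVER={SQL Server};"
        "SERVER=localhost;"
        "DATABASE=testdb;"
        "UID=user;"
        "PWD=password"
    )

    conn = pyodbc.connect(conn_str)
    cursor = conn.cursor()
    cursor.execute("SELECT TOP 5 * FROM some_table")  # hypothetical table
    for row in cursor.fetchall():
        print(row)
    conn.close()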
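For the regex tokenizer question, a small sketch using NLTK's RegexpTokenizer; the pattern below (dollar amounts plus word characters) is just one way to write it.

    from nltk.tokenize import RegexpTokenizer

    # Keep dollar amounts (\$\d+(?:\.\d+)?) and words/numbers (\w+);
    # everything else (punctuation, stray symbols) is dropped.
    tokenizer = RegexpTokenizer(r'\$\d+(?:\.\d+)?|\w+')
    print(tokenizer.tokenize("The book costs $29.99, or 25 EUR abroad!"))
    # ['The', 'book', 'costs', '$29.99', 'or', '25', 'EUR', 'abroad']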
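To see the stemming versus lemmatization difference in practice, a quick comparison of NLTK's PorterStemmer and WordNetLemmatizer (the WordNet data must be downloaded first via nltk.download('wordnet')):

    from nltk.stem import PorterStemmer, WordNetLemmatizer

    stemmer = PorterStemmer()
    lemmatizer = WordNetLemmatizer()

    # The stemmer chops suffixes by rule; the lemmatizer looks up WordNet
    # using the supplied POS and returns an actual dictionary form.
    print(stemmer.stem("running"), lemmatizer.lemmatize("running", pos="v"))  # run run
    print(stemmer.stem("ate"), lemmatizer.lemmatize("ate", pos="v"))          # ate eat
    print(stemmer.stem("corpora"), lemmatizer.lemmatize("corpora", pos="n"))  # corpora corpus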
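On the question of writing a Porter-style stemmer for your native language, note that NLTK already ships Snowball stemmers for a number of languages; the Spanish one is used below purely as an illustration, and the Snowball rule descriptions linked above are a good template for adding your own.

    from nltk.stem import SnowballStemmer

    # Languages with ready-made Snowball rule sets in NLTK.
    print(SnowballStemmer.languages)

    # Example: the Spanish stemmer stripping a verb suffix.
    spanish_stemmer = SnowballStemmer("spanish")
    print(spanish_stemmer.stem("corriendo"))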