The bag-of-words model


In the bag-of-words model, we create from a document a bag containing the words found in that document. In this model, we don't care about the word order. For each word in the document, we count the number of occurrences. With these word counts, we can do statistical analysis, for instance, to identify spam in e-mail messages.
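As a minimal sketch of this counting step, the standard library's collections.Counter can tally the word occurrences in a single document; the sentence below is a made-up example, not data from the book.

    from collections import Counter

    # A made-up example document; in practice this could be an e-mail body.
    document = "the cat sat on the mat the cat"

    # Tokenize naively on whitespace and count how often each word occurs.
    word_counts = Counter(document.lower().split())
    print(word_counts)
    # Counter({'the': 3, 'cat': 2, 'sat': 1, 'on': 1, 'mat': 1})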

If we have a group of documents, we can view each unique word in the corpus as a feature; here, "feature" means parameter or variable. Using all the word counts, we can build a feature vector for each document; "vector" is used here in the mathematical sense. If a word is present in the corpus but not in the document, the value of this feature will be 0. Surprisingly, NLTK currently doesn't have a handy utility to create a feature vector. However, the machine learning Python library, scikit-learn, does have a CountVectorizer class that we can use. In the next chapter, Chapter 10, Predictive Analytics and Machine Learning, we will do more with scikit-learn.
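The following sketch shows how CountVectorizer builds such feature vectors for a tiny, made-up corpus of two documents. Each row of the resulting matrix is the feature vector of one document, with a 0 wherever a corpus word does not appear in that document; note that get_feature_names_out requires scikit-learn 1.0 or later.

    from sklearn.feature_extraction.text import CountVectorizer

    # A tiny made-up corpus of two documents.
    corpus = [
        "the cat sat on the mat",
        "the dog chased the cat",
    ]

    vectorizer = CountVectorizer()
    # fit_transform() learns the vocabulary and returns a sparse document-term
    # matrix: one row per document, one column per unique word in the corpus.
    counts = vectorizer.fit_transform(corpus)

    print(vectorizer.get_feature_names_out())  # the unique words (features)
    print(counts.toarray())  # word counts; 0 where a word is absent from a document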