Data Smart

By: John W. Foreman

Overview of this book

Data science gets thrown around in the press like it's magic. Major retailers are predicting everything from when their customers are pregnant to when they want a new pair of Chuck Taylors. It's a brave new world where seemingly meaningless data can be transformed into valuable insight to drive smart business decisions. But how exactly does one do data science? Do you have to hire one of these priests of the dark arts, the "data scientist," to extract this gold from your data? Nope. Data science is little more than using straightforward steps to process raw data into actionable insight. And in Data Smart, author and data scientist John Foreman will show you how that's done within the familiar environment of a spreadsheet. Why a spreadsheet? It's comfortable! You get to look at the data every step of the way, building confidence as you learn the tricks of the trade. Plus, spreadsheets are a vendor-neutral place to learn data science without the hype. But don't let the Excel sheets fool you. This is a book for those serious about learning the analytic techniques, the math and the magic, behind big data.
Table of Contents (18 chapters)

1. Cover
2. Credits
3. About the Author
4. About the Technical Editors
5. Acknowledgments
18. End User License Agreement

Using Bayes Rule to Create an AI Model

All right, it's time to leave my music taste behind and think on this Mandrill tweet problem. You're going to treat each tweet as a bag of words, meaning you'll break each tweet up into words (often called tokens) at spaces and punctuation. There are two classes of tweets—called app for the Mandrill.com tweets and other for everything else.
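The bag-of-words step can be sketched in a few lines of Python (a sketch outside the book's spreadsheet setting; the sample tweet and the exact splitting rule are illustrative assumptions):

```python
import re

def tokenize(tweet):
    # Lowercase, then split into tokens at spaces and punctuation;
    # drop empty strings left over from consecutive separators.
    return [t for t in re.split(r"[^\w']+", tweet.lower()) if t]

bag = tokenize("Just launched our app on Mandrill.com!")
# e.g. ['just', 'launched', 'our', 'app', 'on', 'mandrill', 'com']
```

Note that a rule like this splits "Mandrill.com" into two tokens; how aggressively you break at punctuation is a design choice.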

You care about these two probabilities:

  1. p(app | word1, word2, word3, …)
  2. p(other | word1, word2, word3, …)

These are the probabilities of a tweet being either about the app or about something else, given that you see the words "word1," "word2," "word3," and so on.

The standard implementation of a naïve Bayes model classifies a new document based on which of these two classes is most likely given the words. In other words, if:

  1. p(app | word1, word2, word3, …) > p(other | word1, word2, word3, …)

then you have a tweet about...
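This comparison can be sketched as a tiny naïve Bayes classifier in Python (a sketch outside the book's spreadsheet setting; the toy training tweets, the equal class priors, and the add-one smoothing are illustrative assumptions, not the book's data):

```python
import math
from collections import Counter

# Hypothetical pre-tokenized training tweets for each class.
app_tweets = [["mandrill", "api", "email"], ["love", "the", "mandrill", "app"]]
other_tweets = [["saw", "a", "mandrill", "at", "the", "zoo"],
                ["mandrill", "band", "tour"]]

def train(docs):
    # Word counts and total word count for one class.
    counts = Counter(w for doc in docs for w in doc)
    return counts, sum(counts.values())

app_counts, app_total = train(app_tweets)
other_counts, other_total = train(other_tweets)
vocab = set(app_counts) | set(other_counts)

def log_score(tokens, counts, total, prior):
    # log p(class) + sum of log p(word | class), with add-one smoothing
    # so unseen words don't zero out the whole product.
    score = math.log(prior)
    for w in tokens:
        score += math.log((counts[w] + 1) / (total + len(vocab)))
    return score

def classify(tokens):
    # Pick whichever class is more likely given the words.
    app = log_score(tokens, app_counts, app_total, 0.5)
    other = log_score(tokens, other_counts, other_total, 0.5)
    return "app" if app > other else "other"
```

Working in log space avoids the numerical underflow you get from multiplying many small probabilities together.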