R Data Analysis Projects

Overview of this book

R offers a large variety of packages and libraries for fast and accurate data analysis and visualization. As a result, it is one of the most popular languages among data scientists, analysts, and anyone who wants to perform data analysis. This book will demonstrate how you can put your existing knowledge of data analysis in R to use to build highly efficient, end-to-end data analysis pipelines without any hassle. You'll start by building a content-based recommendation system, followed by a project on sentiment analysis of tweets. You'll implement time-series modeling for anomaly detection and perform cluster analysis of streaming data. You'll work through projects on efficient market data research, building recommendation systems, and analyzing networks accurately, all with easy-to-follow code. With the help of these real-world projects, you'll get a better understanding of the challenges faced when building data analysis pipelines, and see how you can overcome them without compromising the efficiency or accuracy of your systems. The book covers popular R packages such as dplyr, ggplot2, RShiny, and others, and includes tips on using them effectively. By the end of this book, you'll have a better understanding of data analysis with R and will be able to put your knowledge to practical use.

Building a sentiment classifier


At the beginning of the chapter, we devoted a section to understanding kernel density estimation and how it can be leveraged to approximate the probability density function (PDF) of a random variable from a given set of samples. We are going to use it in this section.

We have one set of positively labeled tweets and another set of negatively labeled tweets. The idea is to learn the PDF of each of these two datasets independently using kernel density estimation.
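As a minimal sketch of this step, assume each tweet has already been reduced to a single numeric score (for example, the sum of its delta TF-IDF weights); the vectors pos.scores and neg.scores below are hypothetical placeholders for those per-tweet scores. The class-conditional PDFs can then be estimated with R's built-in density() function:

# Hypothetical per-tweet scores for the two labeled sets
pos.scores <- c(1.2, 0.8, 1.5, 0.9, 1.1)
neg.scores <- c(-0.7, -1.3, -0.4, -1.1, -0.9)

# Kernel density estimates of the class-conditional PDFs
pos.density <- density(pos.scores)
neg.density <- density(neg.scores)

# Visual check of the two estimated densities
plot(pos.density, col = "blue", main = "Class-conditional densities")
lines(neg.density, col = "red")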

From Bayes' rule, we know that:

P(label | x) = P(x | label) * P(label) / P(x)

Here, P(x | label) is the likelihood, P(label) is the prior, and P(x) is the evidence. The label can be either positive sentiment or negative sentiment.

Using the PDF learned from kernel density estimation, we can easily calculate the likelihood, P(x | label).
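For instance, a density object returned by density() can be evaluated at a new score by interpolating its x and y components; the kde.likelihood helper below is a hypothetical convenience function, not part of any package:

# Evaluate the KDE likelihood P(x | label) at a new point x
kde.likelihood <- function(dens, x) {
  approx(dens$x, dens$y, xout = x, rule = 2)$y
}

kde.likelihood(pos.density, 0.6)   # P(x = 0.6 | label = positive)
kde.likelihood(neg.density, 0.6)   # P(x = 0.6 | label = negative)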

From our class distribution, we know the prior, P(label).

For any new tweet, we can now use Bayes' rule to calculate:

P(label = positive | words and their delta TF-IDF weights)
P(label = negative | words and their delta TF-IDF weights)
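A minimal sketch of this final classification step, reusing the hypothetical objects from the earlier snippets (pos.scores, neg.scores, pos.density, neg.density, and kde.likelihood) and taking the priors from the label proportions in the training data:

# Priors from the class distribution of the labeled tweets
prior.pos <- length(pos.scores) / (length(pos.scores) + length(neg.scores))
prior.neg <- 1 - prior.pos

# Delta TF-IDF score of a new, unseen tweet (placeholder value)
new.score <- 0.6

# Unnormalized posteriors from Bayes' rule; the evidence P(x) cancels
# when we only compare the two labels
post.pos <- kde.likelihood(pos.density, new.score) * prior.pos
post.neg <- kde.likelihood(neg.density, new.score) * prior.neg

predicted.label <- ifelse(post.pos > post.neg, "positive", "negative")
predicted.label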