Mastering Social Media Mining with R
Overview of this book

With the growth in the number of users on the web, the volume of content generated has increased substantially, creating a need to gain insights from the untapped gold mine that is social media data. For computational statistics, R has an advantage over other languages in providing readily available data extraction and transformation packages, making it easier to carry out your ETL tasks. Along with this, its data visualization packages help users get a better understanding of the underlying data distributions, while its range of "standard" statistical packages simplifies analysis of the data. This book will teach you how powerful business cases are solved by applying machine learning techniques to social media data. You will learn about important and recent developments in the field of social media, along with a few advanced topics such as Open Authorization (OAuth). Through practical examples, you will access data from R using the APIs of various social media sites, such as Twitter, Facebook, Instagram, GitHub, Foursquare, LinkedIn, Blogger, and other networks. We will provide you with detailed explanations of the implementation of various use cases using R programming. With this handy guide, you will be ready to embark on your journey as an independent social media analyst.
Table of Contents (13 chapters)
Mastering Social Media Mining with R
Credits
About the Authors
About the Reviewers
www.PacktPub.com
Preface
Index

The order of stories on a user's home page


On Facebook, when we open the home page we see multiple news feed stories, and these stories are updated continuously. Let's try to imitate the same behavior in R. The following code sorts the news feed stories based on their interactions as well as the recency of publishing. If you face any problems here, check the version of the API you are using and retry with version 2.3. The code is as follows:

library(Rfacebook)

# Fetch the 200 most recent posts from the authenticated user's news feed
newsfeed <- getNewsfeed(token, n = 200)
head(newsfeed, 20)

# Convert Facebook's timestamp strings into POSIXct date-time values
# (format.facebook.date() is the date-parsing helper used with Rfacebook)
newsfeed$datetime <- format.facebook.date(newsfeed$created_time)

# Recency score: the most recent post gets the largest priority value
currdate <- Sys.time()
maxdiff <- max(difftime(currdate, newsfeed$datetime, units = "hours"))
newsfeed$priority <- maxdiff - difftime(currdate, newsfeed$datetime, units = "hours")
newsfeed$priority <- as.numeric(newsfeed$priority)

# Min-max scaling to bring each score onto a common 0-100 range
fnpriority <- function(x) { (x - min(x)) / (max(x) - min(x)) }
newsfeed$priority <- fnpriority(newsfeed$priority) * 100
newsfeed$plikes_count <- fnpriority(newsfeed$likes_count) * 100
newsfeed$pcomments_count <- fnpriority(newsfeed$comments_count) * 100
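With recency, likes, and comments each normalized to a 0-100 scale, the stories can be ranked by a combined score. The equal weighting below is an assumption for illustration, not a formula prescribed by the text; Facebook's real ranking uses many more signals.

```r
# Hypothetical combined score: average the three normalized components.
# Equal weights are an assumption; adjust them to favor recency or engagement.
newsfeed$score <- (newsfeed$priority +
                   newsfeed$plikes_count +
                   newsfeed$pcomments_count) / 3

# Order stories from highest to lowest score, imitating the home page feed
newsfeed <- newsfeed[order(-newsfeed$score), ]
head(newsfeed[, c("message", "score")], 10)
```

Because all three components share the same 0-100 range after min-max scaling, a simple average keeps any one signal from dominating the ranking.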