Mastering Social Media Mining with R

Overview of this book

With the increase in the number of users on the web, the volume of content generated has grown substantially, creating a need to gain insights from the untapped gold mine that is social media data. For computational statistics, R has an advantage over other languages in providing readily available data extraction and transformation packages, making it easier to carry out your ETL tasks. Its data visualization packages help users better understand the underlying data distributions, while its range of "standard" statistical packages simplifies analysis of the data. This book will teach you how powerful business cases are solved by applying machine learning techniques to social media data. You will learn about important and recent developments in the field of social media, along with a few advanced topics such as Open Authorization (OAuth). Through practical examples, you will access data from R using the APIs of various social media sites such as Twitter, Facebook, Instagram, GitHub, Foursquare, LinkedIn, Blogger, and other networks. We will provide you with detailed explanations of the implementation of various use cases using R programming. With this handy guide, you will be ready to embark on your journey as an independent social media analyst.
Table of Contents (13 chapters)
Mastering Social Media Mining with R
Credits
About the Authors
About the Reviewers
www.PacktPub.com
Preface
Index

Building additional metrics


We have completed the data formatting and processed the data so that it can be used in our analysis. Before moving on to the analysis itself, let's see how to construct a few metrics, each of which will become a derived column in our dataset. Let's write code to create the following metrics:

  1. Identify if there is a web page associated with the repository.

  2. Count the number of characters in the description.

  3. Identify how long it has been since the repository was created, updated, and pushed.

To identify whether there is a website associated with the repository, we need to look at the homepage column. We will use the grepl function to detect the presence of a dot in homepage, which we consider a proxy for a website entry, since this column holds either the website details or an empty string/number.

# Flag for presence of a website/webpage; fixed = TRUE makes grepl match
# a literal dot rather than the regex metacharacter "any character"
ausersubset$has_web <- as.numeric(grepl(".", ausersubset$homepage, fixed = TRUE))

The preceding code will create a new column named has_web...
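The remaining two metrics from the list can be derived in a similar one-liner style. The following is a minimal sketch: it assumes the dataset has description, created_at, updated_at, and pushed_at columns (the standard GitHub API field names); adjust the names to match your own data frame.

```r
# 2. Number of characters in the description (treat missing descriptions as 0)
ausersubset$desc_length <- ifelse(is.na(ausersubset$description),
                                  0, nchar(ausersubset$description))

# 3. Days elapsed since the repository was created, updated, and pushed;
#    as.Date() parses the leading YYYY-MM-DD part of the API timestamps
today <- Sys.Date()
ausersubset$days_since_created <- as.numeric(today - as.Date(ausersubset$created_at))
ausersubset$days_since_updated <- as.numeric(today - as.Date(ausersubset$updated_at))
ausersubset$days_since_pushed  <- as.numeric(today - as.Date(ausersubset$pushed_at))
```

Each assignment adds one derived column, so the metrics can be used directly as features in the analysis that follows.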