Mastering Social Media Mining with R
Overview of this book

As the number of users on the web has grown, the volume of content they generate has increased substantially, creating a need to gain insights from the untapped gold mine that is social media data. For computational statistics, R has an advantage over other languages in providing readily available data extraction and transformation packages, making it easier to carry out your ETL tasks. Its data visualization packages help users get a better understanding of the underlying data distributions, while its range of standard statistical packages simplifies analysis of the data. This book will teach you how powerful business cases are solved by applying machine learning techniques to social media data. You will learn about important and recent developments in the field of social media, along with a few advanced topics such as Open Authorization (OAuth). Through practical examples, you will access data from R using the APIs of various social media sites such as Twitter, Facebook, Instagram, GitHub, Foursquare, LinkedIn, Blogger, and other networks. We will provide you with detailed explanations of the implementation of various use cases using R programming. With this handy guide, you will be ready to embark on your journey as an independent social media analyst.
Table of Contents (13 chapters)
Mastering Social Media Mining with R
Credits
About the Authors
About the Reviewers
www.PacktPub.com
Preface
Index

Accessing product reviews from sites

Online product reviews are a rich source of information and can be used to judge a brand or a product. Reading every review by hand quickly becomes impractical, so we can write a program to collect them instead. Let's look at one way to extract customer review data from Amazon, using the movie Transformers: Age of Extinction as an example:

urll <- 'http://www.amazon.com/gp/video/detail/B00L83TQR6?ie=UTF8&redirect=true&ref_=s9_nwrsa_gw_g318_i1'

First, we store the relevant URL in a variable so that it can be passed to the functions that follow. Next, we parse the HTML content of the page and save the result in the variable doc; for this, we need to import the XML package. For more details on the HTML DOM, see http://www.w3schools.com/jsref/dom_obj_document.asp. The code is as follows:

library(XML)
doc <- htmlParse(urll)
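Once the page is parsed, the individual review texts can be pulled out of doc with an XPath query. The sketch below is a minimal illustration, not Amazon's documented structure: it assumes the review bodies sit in elements whose class contains "review-text", which has historically been the case but changes frequently, so inspect the live page and adjust the selector as needed.

```r
library(XML)

urll <- 'http://www.amazon.com/gp/video/detail/B00L83TQR6?ie=UTF8&redirect=true&ref_=s9_nwrsa_gw_g318_i1'
doc  <- htmlParse(urll)

# Assumption: review bodies are wrapped in elements with a class containing
# "review-text". xmlValue strips the tags and returns the plain text of
# each matching node as a character vector.
reviews <- xpathSApply(doc,
                       "//*[contains(@class, 'review-text')]",
                       xmlValue)

# Inspect the first few extracted reviews
head(reviews)
```

If the query returns an empty result, the page markup has likely changed; re-examine the HTML in doc and update the XPath expression accordingly.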