
Python Social Media Analytics

By: Baihaqi Siregar, Siddhartha Chatterjee, Michal Krystyanczuk

Overview of this book

Social media platforms such as Facebook, Twitter, forums, Pinterest, and YouTube have become part of everyday life in a big way. However, these complex and noisy data streams pose a real challenge when it comes to harnessing them properly and benefiting from them. This book introduces you to the concept of social media analytics and shows how you can leverage its capabilities to empower your business. Starting with acquiring data from social networking sources such as Twitter, Facebook, YouTube, Pinterest, and social forums, you will see how to clean the data and make it ready for analytical operations using various Python APIs. The book explains how to structure the cleaned data and store it in MongoDB using PyMongo. You will also perform web scraping with Scrapy and BeautifulSoup, and visualize the data you collect. Finally, you will be introduced to different techniques for performing analytics at scale on your social data in the cloud, using Python and Spark. By the end of this book, you will be able to use the power of Python to gain valuable insights from social media data and apply them to enhance your business processes.
Table of Contents (17 chapters)
Title Page
Credits
About the Authors
Acknowledgments
About the Reviewer
www.PacktPub.com
Customer Feedback
Preface

Data pull


The amount of data we collect through the GitHub API is small enough to fit in memory, so we can work with it directly in a pandas dataframe. If more data were required, we would recommend storing it in a database, such as MongoDB.

We use JSON tools to convert the results into clean JSON and to create a dataframe.

from pandas.io.json import json_normalize  # pd.json_normalize in recent pandas
import json 
import pandas as pd 
import bson.json_util as json_util
 
# json_util.dumps serializes BSON types (ObjectId, dates) that the standard
# json encoder cannot handle; json.loads then yields plain Python dicts
sanitized = json.loads(json_util.dumps(results)) 
# flatten the nested JSON into a table with dotted column names
normalized = json_normalize(sanitized) 
df = pd.DataFrame(normalized) 

The dataframe df contains columns for all the fields returned by the GitHub API. We can list them by typing the following:

df.columns 
 
Index(['archive_url', 'assignees_url', 'blobs_url', 'branches_url', 
       'clone_url', 'collaborators_url', 'comments_url', 'commits_url', 
       'compare_url', 'contents_url', 'contributors_url', 'default_branch', 
       'deployments_url', 'description', 'downloads_url', 'events_url', 
       'fork', 
    ...
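To see how json_normalize produces these dotted column names, here is a self-contained toy example. The record below only mimics the nested shape of a GitHub repository object; it is illustrative, not the real API payload, and it uses the pd.json_normalize spelling exposed by recent pandas versions.

```python
import pandas as pd

# A toy record mimicking the nested shape of a GitHub repository object;
# the fields shown are illustrative, not the full API payload.
records = [{
    "name": "demo-repo",
    "fork": False,
    "owner": {"login": "octocat", "id": 1},
}]

flat = pd.json_normalize(records)  # nested keys become dotted column names
print(sorted(flat.columns))        # ['fork', 'name', 'owner.id', 'owner.login']
print(flat["owner.login"][0])      # octocat
```

Selecting a subset such as flat[["name", "fork"]] then works exactly as with any other dataframe, which is how the columns listed above can be narrowed down for analysis.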