
Topic models at scale

For the final Spark example, we will build a simple topic model using MLlib (the Spark machine learning library) on our corpus.

We will use nouns as the features for our documents. First, we will import the required classes:

from pyspark.mllib.clustering import LDA, LDAModel 
from pyspark.mllib.linalg import Vectors 

We will build the vocabulary from the noun word count RDD:

vocabulary = noun_word_count.map(lambda w: w[0]).collect() 
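
Collecting the vocabulary brings it back to the driver as an ordinary Python list, so each term can be assigned a fixed column index for the term-count vectors that LDA consumes. The small lookup table below is our own illustrative addition, not code from the excerpt:

# Hypothetical helper: map each vocabulary term to a stable column
# index for the term-count vectors built later.
vocab_index = {term: i for i, term in enumerate(vocabulary)}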

Next, we need to transform the chunks corpus into a list of nouns per document:

from itertools import chain  # provides chain.from_iterable used below

doc_nouns = chunks \
    .map(lambda doc_chunks: list(filter(
            # keep only the noun-phrase (NP) chunks in each document;
            # list() materializes the lazy filter so len() works below
            lambda chunk: chunk.part_of_speech == 'NP',
            doc_chunks
        ))) \
    .filter(lambda noun_chunks: len(noun_chunks) > 0) \
    .map(lambda noun_chunks: list(chain.from_iterable(map(
            # flatten each document's chunks into a single word list
            lambda chunk: chunk.words,
            noun_chunks
        )))) \
    .map(lambda words: list(filter(
            # keep only words with a noun-like part-of-speech tag
            lambda word: match_noun_like_pos(word.part_of_speech),
            words
        ))) \
    .filter...
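
From here, a typical pyspark.mllib continuation turns each document's noun list into a sparse term-count vector over the vocabulary and trains LDA on an RDD of [document id, vector] pairs. The sketch below is a minimal illustration under stated assumptions rather than the book's code: it presumes the truncated chain above yields doc_nouns as an RDD of per-document word lists, that str(word) returns the token text matching the vocabulary strings, and it reuses the hypothetical vocab_index lookup; k=10 topics is an arbitrary choice:

from collections import Counter

def to_sparse_counts(words):
    # Hypothetical helper: count vocabulary terms in one document and
    # pack them into a sparse vector of length len(vocabulary).
    counts = Counter(str(w) for w in words if str(w) in vocab_index)
    indices = sorted(vocab_index[t] for t in counts)
    values = [float(counts[vocabulary[i]]) for i in indices]
    return Vectors.sparse(len(vocabulary), indices, values)

# MLlib's LDA.train expects an RDD of [document id, term-count vector].
corpus = doc_nouns \
    .map(to_sparse_counts) \
    .zipWithIndex() \
    .map(lambda pair: [pair[1], pair[0]]) \
    .cache()

lda_model = LDA.train(corpus, k=10, maxIterations=20)

# Print the top five terms for each topic.
for term_indices, term_weights in lda_model.describeTopics(maxTermsPerTopic=5):
    print([vocabulary[i] for i in term_indices])

For each topic, describeTopics returns matching arrays of term indices and term weights, so mapping the indices back through the vocabulary recovers the human-readable terms.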