Frank Kane's Taming Big Data with Apache Spark and Python

By: Frank Kane
Overview of this book

Frank Kane’s Taming Big Data with Apache Spark and Python is your companion to learning Apache Spark in a hands-on manner. Frank will start you off by teaching you how to set up Spark on a single system or on a cluster, and you’ll soon move on to analyzing large data sets using Spark RDD, and developing and running effective Spark jobs quickly using Python. Apache Spark has emerged as the next big thing in the Big Data domain – quickly rising from an ascending technology to an established superstar in just a matter of years. Spark allows you to quickly extract actionable insights from large amounts of data, on a real-time basis, making it an essential tool in many modern businesses. Frank has packed this book with over 15 interactive, fun-filled examples relevant to the real world, and he will empower you to understand the Spark ecosystem and implement production-grade real-time Spark projects with ease.

Running the script - discover who the most popular superhero is


Let's dive into the code for finding the most popular superhero in the Marvel Universe and get our answer. Who will it be? We'll find out soon. Go to the download package for this book and download three things: the Marvel-graph.txt data file, which contains our social network of superheroes; the Marvel-names.txt file, which maps superhero IDs to human-readable names; and finally, the most-popular-superhero script. Download all of that into your SparkCourse folder and then open up most-popular-superhero.py in your Python environment.

Alright, let's see what's going on here. We have the usual stuff at the top, so let's get down to the meat of it.

Mapping input data to (hero ID, number of co-occurrences) per line

The first thing we do, if you look at line 14, is load up our Marvel-names.txt file into an RDD called names using sc.textFile:

names = sc.textFile("file:///SparkCourse/marvel-names.txt") 
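To give a feel for where this is headed, here is a minimal sketch of the two mapping steps described in the heading above. This is not the script's exact code: the helper names parseNames and countCoOccurrences, and the assumed file formats (Marvel-names.txt lines such as 1 "24-HOUR MAN/EMMANUEL", and Marvel-graph.txt lines that start with a hero ID followed by the IDs of every hero they appear with) are illustrative assumptions:

def parseNames(line):
    # Each line is a numeric ID followed by a quoted hero name,
    # so splitting on the quote character isolates both pieces.
    fields = line.split('"')
    return (int(fields[0]), fields[1])

def countCoOccurrences(line):
    # The first number on each line is a hero ID; every remaining
    # number is another hero appearing with them, so the number of
    # co-occurrences on this line is the field count minus one.
    elements = line.split()
    return (int(elements[0]), len(elements) - 1)

namesRdd = names.map(parseNames)

lines = sc.textFile("file:///SparkCourse/marvel-graph.txt")
pairings = lines.map(countCoOccurrences)

Because a single superhero can span more than one line in Marvel-graph.txt, these per-line counts still need to be summed by hero ID (for example, with reduceByKey) before we can pick out the most popular hero.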

We're going to do this name...