Frank Kane's Taming Big Data with Apache Spark and Python

By: Frank Kane
Overview of this book

Frank Kane’s Taming Big Data with Apache Spark and Python is your companion to learning Apache Spark in a hands-on manner. Frank will start you off by teaching you how to set up Spark on a single system or on a cluster, and you’ll soon move on to analyzing large data sets using Spark RDDs, and developing and running effective Spark jobs quickly using Python. Apache Spark has emerged as the next big thing in the Big Data domain, rising from an ascending technology to an established superstar in just a matter of years. Spark allows you to quickly extract actionable insights from large amounts of data, on a real-time basis, making it an essential tool in many modern businesses. Frank has packed this book with over 15 interactive, fun-filled examples relevant to the real world, and he will empower you to understand the Spark ecosystem and implement production-grade real-time Spark projects with ease.
Table of Contents (13 chapters)
Title Page
Credits
About the Author
www.PacktPub.com
Customer Feedback
Preface
7. Where to Go From Here? – Learning More About Spark and Data Science

Sorting the word count results


Okay, let's do one more round of improvements on our word-count script. We need to sort our word-count results by something useful. Instead of just getting a random list of words paired with how many times they appear, we want to see the least used words at the beginning of our list and the most used words at the end. That should give us some genuinely interesting information to look at. To do this, we're going to need to manipulate our results a little more directly; we can't just cheat by using countByValue() and calling it done.
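As a preview of where we're headed, here's a plain-Python sketch (no Spark required) of the kind of output we're after: word counts sorted in ascending order, so rare words come first and common words last. The sample text here is invented purely for illustration.

```python
from collections import Counter

# Hypothetical sample input standing in for a real text file.
text = "the quick brown fox jumps over the lazy dog the fox"
words = text.split()

# Count each word, then sort by count ascending so the most
# frequent words land at the end of the list.
counts = Counter(words)
sorted_counts = sorted(counts.items(), key=lambda pair: pair[1])

for word, count in sorted_counts:
    print(word, count)
```

Running this prints the one-off words first and ends with "fox 2" and "the 3". Our Spark job will produce the same shape of result, just distributed across a cluster.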

Step 1 - Implement countByValue() the hard way to create a new RDD

So the first thing we're going to do is actually implement what countByValue() does by hand, the hard way. That way, we can play with the results more directly and keep them in an RDD, instead of getting back a plain Python object that we then have to deal with on the driver. The way we do that is to take our RDD of words, words, and call map with a mapper that...
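To make the map-then-reduce idea concrete, here's a minimal plain-Python emulation of the classic Spark pattern for counting by hand: map each word to a (word, 1) pair, then combine the pairs that share a key. In Spark itself this pattern is typically written as `words.map(lambda x: (x, 1)).reduceByKey(lambda x, y: x + y)`; the helper function and sample data below are illustrative stand-ins, not Spark API.

```python
# Emulate Spark's map + reduceByKey on a plain Python list.
# The Spark equivalent of this pattern would be:
#   word_counts = words.map(lambda x: (x, 1)) \
#                      .reduceByKey(lambda x, y: x + y)

def reduce_by_key(pairs, func):
    """Illustrative stand-in for RDD.reduceByKey:
    combine the values of all pairs that share a key."""
    combined = {}
    for key, value in pairs:
        combined[key] = func(combined[key], value) if key in combined else value
    return list(combined.items())

words = ["spark", "python", "spark", "data", "spark", "python"]

# Step 1: map each word to a (word, 1) pair.
pairs = [(word, 1) for word in words]

# Step 2: reduce by key, adding the 1s together per word.
word_counts = reduce_by_key(pairs, lambda x, y: x + y)
print(word_counts)  # [('spark', 3), ('python', 2), ('data', 1)]
```

The payoff of doing it this way in Spark is that the result stays an RDD of (word, count) tuples, which we can keep transforming, for example by sorting it, instead of a Python dictionary sitting on the driver.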