Apache Spark 2.x Cookbook

By: Rishi Yadav
Overview of this book

While Apache Spark 1.x gained significant traction and adoption in its early years, Spark 2.x delivers notable improvements in the areas of API design, schema awareness, performance, and Structured Streaming, and it simplifies the building blocks for creating better, faster, smarter, and more accessible big data applications. This book uncovers these features in the form of structured recipes for analyzing and processing large and complex datasets. Starting with installing and configuring Apache Spark with various cluster managers, you will learn how to set up development environments. You will then be introduced to working with RDDs, DataFrames, and Datasets to operate on schema-aware data, and to real-time streaming with sources such as Twitter Stream and Apache Kafka. You will also work through recipes on machine learning, including supervised learning, unsupervised learning, and recommendation engines in Spark. Last but not least, the final few chapters delve deeper into graph processing with GraphX, securing your implementations, cluster optimization, and troubleshooting.

Performing neighborhood aggregation


GraphX performs most of its computation by isolating each vertex together with its neighbors, which makes it easier to process massive graph data on distributed systems. This also makes neighborhood operations very important. GraphX provides a mechanism to aggregate at the neighborhood level in the form of the aggregateMessages method, which works in two steps:

  1. In the first step (the first function passed to the method), messages are sent along each edge to its destination or source vertex (similar to the Map function in MapReduce).
  2. In the second step (the second function passed to the method), these messages are aggregated at each vertex (similar to the Reduce function in MapReduce). A sketch of both steps appears after the sample dataset below.

Getting ready

Let's build a small dataset of followers:

Follower   Followee
John       Barack
Pat        Barack
Gary       Barack
Chris      Mitt
Rob        Mitt
Our goal is to find out how many followers each node has. Let's load this data in the form of two files: nodes.csv and edges.csv.
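Before loading from files, the following is a minimal sketch of both steps on this dataset in the Spark shell (where sc is available); the numeric IDs for Chris and Rob and the in-memory graph construction are illustrative assumptions, as the recipe itself loads the data from nodes.csv and edges.csv:

import org.apache.spark.graphx.{Edge, Graph}

// In-memory version of the follower dataset (IDs for Chris and Rob are assumed);
// an edge (src, dst) means that src follows dst
val vertices = sc.parallelize(Seq(
  (1L, "Barack"), (2L, "John"), (3L, "Pat"), (4L, "Gary"),
  (5L, "Mitt"), (6L, "Chris"), (7L, "Rob")))
val edges = sc.parallelize(Seq(
  Edge(2L, 1L, 1), Edge(3L, 1L, 1), Edge(4L, 1L, 1),
  Edge(6L, 5L, 1), Edge(7L, 5L, 1)))
val graph = Graph(vertices, edges)

// Step 1 (first function): every edge sends the message 1 to its destination vertex
// Step 2 (second function): the messages arriving at each vertex are summed
val followerCounts = graph.aggregateMessages[Int](
  ctx => ctx.sendToDst(1),
  (a, b) => a + b)

// Attach the names and print lines such as "Barack has 3 follower(s)"
followerCounts.join(graph.vertices)
  .map { case (_, (count, name)) => s"$name has $count follower(s)" }
  .collect.foreach(println)

Because only destination vertices receive messages here, vertices with no followers (such as John) simply do not appear in the result.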

The following is the content of nodes.csv:

1,Barack 
2,John 
3,Pat 
4,Gary 
5,Mitt...