Apache Spark 2: Data Processing and Real-Time Analytics

By: Romeo Kienzler, Md. Rezaul Karim, Sridhar Alla, Siamak Amirghodsi, Meenakshi Rajendran, Broderick Hall, Shuen Mei

Overview of this book

Apache Spark is an in-memory, cluster-based data processing system that provides a wide range of functionality, such as big data processing, analytics, machine learning, and more. With this Learning Path, you can take your knowledge of Apache Spark to the next level by learning how to expand Spark's functionality and build your own data flow and machine learning programs on this platform. You will work with the different modules in Apache Spark, such as interactive querying with Spark SQL, using DataFrames and Datasets, implementing streaming analytics with Spark Streaming, and applying machine learning and deep learning techniques on Spark using MLlib and various external tools. By the end of this carefully designed Learning Path, you will have all the knowledge you need to master Apache Spark and to build your own big data processing and analytics pipeline quickly and without any hassle.

This Learning Path includes content from the following Packt products:

• Mastering Apache Spark 2.x by Romeo Kienzler
• Scala and Spark for Big Data Analytics by Md. Rezaul Karim and Sridhar Alla
• Apache Spark 2.x Machine Learning Cookbook by Siamak Amirghodsi, Meenakshi Rajendran, Broderick Hall, and Shuen Mei

Graph analytics/processing with GraphX


This section examines Apache Spark GraphX programming in Scala, using the family relationship graph data sample shown in the last section. The data will be accessed as a list of vertices and edges. Although this data set is small, graphs built in this way can be very large. For example, we have been able to analyze 30 TB of financial transaction data from a large bank using only four Apache Spark workers.
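Before any data is loaded, a SparkContext is needed. The following is a minimal setup sketch assumed by the snippets in this section; the application name and the local master setting are placeholders, and in the spark-shell a SparkContext is already available as sc:

import org.apache.spark.{SparkConf, SparkContext}

// Minimal standalone setup; setMaster("local[*]") is for local testing only
// and would normally be supplied by spark-submit on a cluster.
val conf = new SparkConf().setAppName("GraphXFamilyGraph").setMaster("local[*]")
val sc = new SparkContext(conf)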

The raw data

We are working with two data files. They contain the vertices and edges that make up the graph used in this section:

graph1_edges.csv
graph1_vertex.csv

The vertex file contains just six lines representing the graph used in the last section. Each vertex represents a person and has a vertex ID number, a name, and an age value:

1,Mike,48
2,Sarah,45
3,John,25
4,Jim,53
5,Kate,22
6,Flo,52
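As an illustrative sketch (the file path and variable names here are assumptions, not taken from the text), each id,name,age line can be parsed into the (VertexId, properties) pairs that GraphX expects:

import org.apache.spark.graphx.VertexId
import org.apache.spark.rdd.RDD

// Parse each "id,name,age" line into a (VertexId, (name, age)) pair;
// the path is a placeholder for wherever the file actually lives.
val vertices: RDD[(VertexId, (String, Int))] =
  sc.textFile("graph1_vertex.csv").map { line =>
    val fields = line.split(",")
    (fields(0).toLong, (fields(1), fields(2).toInt))
  }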

The edge file contains a set of directed edge values in the form source vertex ID, destination vertex ID, and relationship...
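Assuming each edge line follows that source,destination,relationship layout, a minimal sketch for parsing the edges and assembling the GraphX graph might look like this; the path and the default vertex value are assumptions:

import org.apache.spark.graphx.{Edge, Graph}
import org.apache.spark.rdd.RDD

// Parse each "srcId,dstId,relationship" line into a GraphX Edge.
val edges: RDD[Edge[String]] =
  sc.textFile("graph1_edges.csv").map { line =>
    val fields = line.split(",")
    Edge(fields(0).toLong, fields(1).toLong, fields(2))
  }

// Build the graph; the default value covers any edge that references
// a vertex ID missing from the vertex file.
val defaultPerson = ("Unknown", 0)
val graph = Graph(vertices, edges, defaultPerson)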