
Parquet files


Apache Parquet is a columnar storage format available to any project in the Hadoop ecosystem, regardless of the choice of data processing framework, data model, or programming language. Parquet's design was based on Google's Dremel paper, and it is considered one of the best-performing data formats in a number of scenarios. We won't go into too much detail about Parquet here, but if you are interested, you can read more at https://parquet.apache.org/. To show how Spark can work with Parquet files, we will write the CDR JSON file out as a Parquet file, then load it back and do some basic data manipulation.

Example: Scala - Reading/Writing Parquet Files

// Reading a JSON file as a DataFrame
val callDetailsDF = spark.read.json("/home/spark/sampledata/json/cdrs.json")
// Write the DataFrame out as a Parquet file
callDetailsDF.write.parquet("/home/spark/sampledata/cdrs.parquet")
// Loading the Parquet file back as a DataFrame
val callDetailsParquetDF = spark.read.parquet("/home/spark/sampledata/cdrs.parquet")
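
Once the Parquet file is loaded back as a DataFrame, it supports the same operations as any other DataFrame. The following is a minimal sketch of some basic manipulation; the column name Origin is an assumption about the CDR schema, so adjust it to match the fields in your actual data.

Example: Scala - Basic Manipulation on the Parquet-Backed DataFrame

// Inspect the schema Spark reconstructed from the Parquet metadata
callDetailsParquetDF.printSchema()

// Count calls per originating number and show the busiest callers first.
// "Origin" is a hypothetical column name; replace it with a real CDR field.
import org.apache.spark.sql.functions.desc
callDetailsParquetDF
  .groupBy("Origin")
  .count()
  .orderBy(desc("count"))
  .show(10)

Because Parquet stores data column by column, a query like this one, which touches only a single column, reads just that column from disk rather than the whole file, which is a large part of why Parquet performs well for analytical workloads.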