Apache Spark 2.x Cookbook

By: Rishi Yadav
Overview of this book

While Apache Spark 1.x gained significant traction and adoption in its early years, Spark 2.x delivers notable improvements in the areas of APIs, schema awareness, performance, and Structured Streaming, simplifying the building blocks for faster, smarter, and more accessible big data applications. This book covers these features in the form of structured recipes for analyzing and maturing large, complex sets of data. Starting with installing and configuring Apache Spark with various cluster managers, you will learn how to set up development environments. You will then work with RDDs, DataFrames, and Datasets to operate on schema-aware data, and with real-time streaming from sources such as the Twitter stream and Apache Kafka. You will also work through recipes on machine learning, including supervised learning, unsupervised learning, and recommendation engines in Spark. Finally, the last few chapters delve into graph processing with GraphX, securing your implementations, cluster optimization, and troubleshooting.
Table of Contents (19 chapters)
Title Page
Credits
About the Author
About the Reviewer
www.PacktPub.com
Customer Feedback
Preface

Understanding the Parquet format


Apache Parquet is a columnar data storage format designed specifically for big data storage and processing. It is based on the record shredding and assembly algorithm from the Google Dremel paper. In Parquet, the data in a single column is stored contiguously. This columnar format gives Parquet some unique benefits. For example, if you have a table with 100 columns and you mostly access only 10 of them, a row-based format forces you to load all 100 columns, as the granularity is at the row level, whereas Parquet lets you load just the 10 columns you need. Another benefit is that, since all of the data in a given column is of the same datatype (by definition), compression is much more efficient.
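The column-pruning arithmetic above can be sketched with a toy layout comparison in plain Python. This is not real Parquet (no encodings, pages, or compression); the names and sizes are illustrative only, chosen to mirror the 100-column/10-column example:

```python
# Toy illustration (NOT real Parquet): why a columnar layout lets a
# reader skip columns it does not need. All sizes are hypothetical.

NUM_ROWS = 1_000
NUM_COLS = 100
COLS_NEEDED = 10  # columns the query actually touches

# Row-based layout: each row stores all 100 column values contiguously,
# so fetching even one column means scanning every full row.
row_store = [[(r, c) for c in range(NUM_COLS)] for r in range(NUM_ROWS)]
row_values_read = sum(len(row) for row in row_store)

# Columnar layout: each column's values are stored contiguously,
# so the reader loads only the columns the query needs.
col_store = [[(r, c) for r in range(NUM_ROWS)] for c in range(NUM_COLS)]
col_values_read = sum(len(col_store[c]) for c in range(COLS_NEEDED))

print(row_values_read, col_values_read)  # 100000 10000: a 10x reduction
```

In Spark itself you get this pruning for free: reading with `spark.read.parquet(...)` and then selecting a subset of columns (for example via `select`) lets Spark load only the column chunks those columns require.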

While we are discussing Parquet and its merits, it's a good idea to discuss the economic reason behind Parquet's success. There are two factors that every practitioner would like to...