Spark Cookbook

By: Rishi Yadav

Loading and saving data using the Parquet format


Apache Parquet is a columnar data storage format, specifically designed for big data storage and processing. Parquet is based on the record shredding and assembly algorithm described in Google's Dremel paper. In Parquet, the data in a single column is stored contiguously.

The columnar format gives Parquet some unique benefits. For example, if you have a table with 100 columns and you mostly access only 10 of them, a row-based format forces you to load all 100 columns, because the granularity is at the row level. In Parquet, you load only the 10 columns you need. Another benefit is that, since all the data in a given column has the same datatype (by definition), compression is much more efficient.
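As a rough sketch of that benefit (the spark-shell session with sqlContext, the DataFrame reader API, and the wide_table.parquet path are assumptions for illustration, not part of this recipe), Spark reads only the columns you select from a Parquet file:

    // Hypothetical wide Parquet table with many columns (path is a placeholder).
    val wide = sqlContext.read.parquet("hdfs://localhost:9000/user/hduser/wide_table.parquet")

    // Because Parquet stores each column contiguously, selecting only the
    // columns you need lets Spark skip the rest on disk (column pruning).
    val slim = wide.select("col1", "col2", "col3")
    slim.show()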

How to do it...

  1. Open the terminal and create the person data in a local directory:

    $ mkdir person
    $ echo "Barack,Obama,53" >> person/person.txt
    $ echo "George,Bush,68" >> person/person.txt
    $ echo "Bill,Clinton,68" >> person/person.txt
    
  2. Upload the person directory...
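The remaining steps typically upload this directory to HDFS (for example, with hdfs dfs -put person person) and then convert the data to the Parquet format from the Spark shell. The following is a minimal sketch of that flow; the HDFS paths, column names, and use of the DataFrame reader/writer API are illustrative assumptions, not the book's exact commands:

    // Assumed spark-shell session; paths and column names are placeholders.
    import sqlContext.implicits._

    // Load the raw comma-separated text data and give the columns names.
    val personDF = sc.textFile("hdfs://localhost:9000/user/hduser/person")
      .map(_.split(","))
      .map(p => (p(0), p(1), p(2).trim.toInt))
      .toDF("firstName", "lastName", "age")

    // Save the DataFrame in the Parquet format.
    personDF.write.parquet("hdfs://localhost:9000/user/hduser/person.parquet")

    // Load it back from Parquet and verify the contents.
    sqlContext.read.parquet("hdfs://localhost:9000/user/hduser/person.parquet").show()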