Spark provides a unified runtime for big data. The Hadoop Distributed File System (HDFS) has traditionally been the most widely used storage platform for Spark, as it provided the most cost-effective storage for unstructured and semi-structured data on commodity hardware. This has been upended by public cloud storage systems, especially Amazon S3. This edition of the book reflects that reality, with special emphasis on connectivity to S3.
That being said, Spark leverages Hadoop's InputFormat and OutputFormat interfaces for file I/O. InputFormat is responsible for creating InputSplits from the input data and dividing each split further into records; OutputFormat is responsible for writing to storage. The following image illustrates InputFormat metaphorically:
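To make the division of labor concrete, here is a minimal sketch in Scala, assuming a SparkContext named sc is already available (for example, in spark-shell). It reads a file through an explicit Hadoop InputFormat, which is what sc.textFile does under the hood; the file paths are hypothetical.

```scala
import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.hadoop.mapred.TextInputFormat

// TextInputFormat creates InputSplits from the input and divides each
// split into (byte offset, line) records; we keep only the line text.
val lines = sc.hadoopFile[LongWritable, Text, TextInputFormat]("input.txt")
              .map { case (_, line) => line.toString }

// Writing goes through an OutputFormat; saveAsTextFile uses Hadoop's
// TextOutputFormat under the hood.
lines.saveAsTextFile("output-dir")
```

The same InputFormat/OutputFormat pair that a MapReduce job would use can thus be plugged directly into Spark.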
We will start by writing to the local filesystem and then move on to loading data from HDFS. In the Loading data from HDFS recipe, we will cover the most common file format: regular text files. We will also explore loading data stored in Amazon S3...
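As a preview of these recipes, the sketch below (Scala) shows how the storage backend is selected purely by the URI scheme of the path passed to sc.textFile. It assumes a SparkContext sc, a running HDFS NameNode, and the hadoop-aws (s3a) connector on the classpath; the host, port, bucket, and file names are hypothetical.

```scala
// Local filesystem
val local = sc.textFile("file:///tmp/words.txt")

// HDFS -- host and port of the NameNode are assumptions
val hdfs = sc.textFile("hdfs://namenode:9000/user/hduser/words.txt")

// Amazon S3 via the s3a connector -- bucket name is hypothetical
val s3 = sc.textFile("s3a://my-bucket/words.txt")

// Writing works the same way: the scheme picks the filesystem
s3.saveAsTextFile("hdfs://namenode:9000/user/hduser/words-copy")
```

Because all three schemes go through Hadoop's FileSystem abstraction, the recipes that follow differ mainly in configuration, not in Spark code.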