In this chapter, we started with a
Hello World program and set up an IDE (Eclipse) for executing Spark jobs. We then discussed RDD transformations, covering common ones such as
mapToPair. We also explored commonly used RDD actions and some of the use cases associated with them. Finally, we gained an understanding of how to improve Spark job performance by using Apache Spark's built-in cache and persist mechanisms.
The next chapter will focus on how Apache Spark interacts with the data and storage layer. We will learn about Spark's integration with external storage systems such as HDFS and S3, and its ability to process various data formats such as XML and JSON.