In SQL, data aggregation is very flexible, and the same is true of Spark SQL. Instead of running SQL statements against a single data source on a single machine, Spark SQL can run them against distributed data sources. The chapter covering RDD-based programming discussed a MapReduce use case for data aggregation, and the same use case is used here to demonstrate the aggregation capabilities of Spark SQL. In this section, too, the use cases are approached both the SQL query way and the DataFrame API way.
The use cases selected to illustrate this MapReduce style of data processing are as follows:
The retail banking transaction records come as comma-separated strings containing the account number and the transaction amount
Find an account-level summary of all the transactions to get the account balance
At the R REPL prompt, try the following statements:
> # Read data from a JSON file to create DataFrame
> acTransDFForAgg...
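The listing above is truncated; as a sketch of where it is heading, the statements below show the aggregation done both the DataFrame API way and the SQL query way in SparkR. This assumes an active Spark session and a JSON file named `TransList.json` whose records carry `AccNo` and `TranAmount` fields; the file name and the summary variable names are illustrative, not the book's exact listing.

```r
# Assumption: SparkR is on the library path and a Spark session can be started
library(SparkR)
sparkR.session()

# Read data from a JSON file to create a DataFrame
# (file name "TransList.json" is a placeholder for the actual data file)
acTransDFForAgg <- read.json("TransList.json")

# DataFrame API way: group by account number and sum the transaction amounts
acSummaryDF <- agg(groupBy(acTransDFForAgg, acTransDFForAgg$AccNo),
                   TranAmount = sum(acTransDFForAgg$TranAmount))
showDF(acSummaryDF)

# SQL query way: register a temporary view and aggregate with a SQL statement
createOrReplaceTempView(acTransDFForAgg, "trans")
acSummarySQL <- sql("SELECT AccNo, sum(TranAmount) AS TranAmount
                     FROM trans GROUP BY AccNo")
showDF(acSummarySQL)
```

Both paths produce the same account-level balances; the grouping and summing are executed by the same Spark SQL engine regardless of which API expresses the query.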