Spark offers users three APIs for interacting with distributed collections of data: the RDD API, the DataFrame API, and the newer Dataset API. The traditional RDD API provides compile-time type safety and powerful lambda functions, but its transformations are opaque to the engine, so their execution cannot be optimized. The DataFrame and Dataset APIs expose a domain-specific language for relational operations and deliver significantly better performance than RDDs. The Dataset API combines the strengths of both: the typed, functional style of RDDs and the optimized execution of DataFrames. Users can choose among RDDs, DataFrames, and Datasets depending on their needs, but in general DataFrames or Datasets are preferred over raw RDDs for better performance. Under the hood, Spark SQL uses the Catalyst optimizer to plan and optimize queries.
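The contrast between the two styles can be sketched as follows. This is a minimal example, not a benchmark: the column names, sample data, and app name are illustrative, and it assumes a local Spark installation on the classpath. The RDD filter uses an arbitrary Scala lambda the engine cannot inspect, while the DataFrame filter is a declarative expression that Catalyst can analyze and optimize (e.g., via predicate pushdown and code generation).

```scala
import org.apache.spark.sql.SparkSession

object RddVsDataFrame {
  def main(args: Array[String]): Unit = {
    // Local session for illustration only
    val spark = SparkSession.builder()
      .appName("RddVsDataFrame")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // RDD API: type-safe lambda, but opaque to the optimizer
    val rdd = spark.sparkContext.parallelize(Seq(("alice", 34), ("bob", 45)))
    val adultsRdd = rdd.filter { case (_, age) => age > 40 }

    // DataFrame API: a declarative expression Catalyst can optimize
    val df = rdd.toDF("name", "age")
    val adultsDf = df.filter($"age" > 40)

    adultsDf.show()
    spark.stop()
  }
}
```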
The Dataset and DataFrame APIs provide query optimization, speed, automatic schema discovery, support for multiple data sources, multiple language bindings, and predicate pushdown; moreover, they interoperate freely with RDDs. The Dataset API was introduced in Spark 1.6 and is available in Scala...
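A short sketch of the typed Dataset API and its interoperability with the other two APIs, assuming a local Spark runtime; the `Person` case class and sample values are hypothetical. The typed lambda in `filter` is checked by the Scala compiler, and the same data can then be viewed as a DataFrame or an RDD without copying semantics changing.

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical record type for the typed Dataset API
case class Person(name: String, age: Long)

object DatasetInterop {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("DatasetInterop")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Dataset: compile-time types plus Catalyst optimization
    val ds = Seq(Person("alice", 34), Person("bob", 45)).toDS()
    val adults = ds.filter(_.age > 40)   // typed lambda, checked at compile time

    // Interoperability: the same data as a DataFrame or an RDD
    val df  = adults.toDF()              // Dataset[Person] -> DataFrame
    val rdd = adults.rdd                 // Dataset[Person] -> RDD[Person]

    df.show()
    spark.stop()
  }
}
```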