DataFrames were introduced in Spark 1.3. The DataFrame builds on the concept of providing a schema over the data. An RDD consists of raw data: although it provides various functions to process that data, it is a collection of Java objects and therefore incurs the overhead of garbage collection and serialization. Moreover, Spark SQL can only be leveraged over data that carries a schema. So, earlier versions of Spark provided another flavor of RDD called SchemaRDD.
As its name suggests, a SchemaRDD is an RDD with a schema. Because it carries a schema, relational queries can be run on the data alongside the basic RDD functions. A SchemaRDD can be registered as a table so that SQL queries can be executed on it using Spark SQL. It was available in earlier versions of Spark. However, with Spark 1.3, SchemaRDD was deprecated and the DataFrame was introduced.
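The workflow described above, deriving a schema over raw data, registering the result as a table, and querying it with SQL, can be sketched as follows. This is a minimal sketch using the Spark 1.3-era Scala API, and it assumes a `SparkContext` named `sc` is already available (as it is in `spark-shell`); the `Person` case class and the sample rows are illustrative only.

```scala
import org.apache.spark.sql.SQLContext

// Assumes an existing SparkContext `sc` (e.g. from spark-shell).
val sqlContext = new SQLContext(sc)
import sqlContext.implicits._

// A case class supplies the schema that plain RDDs lack.
case class Person(name: String, age: Int)

// Build a DataFrame from an RDD of case-class instances.
val people = sc.parallelize(Seq(Person("Alice", 29), Person("Bob", 35))).toDF()

// Register the DataFrame as a table and run a relational query on it.
people.registerTempTable("people")
val adults = sqlContext.sql("SELECT name FROM people WHERE age > 30")
adults.show()
```

In later Spark versions `registerTempTable` was itself superseded by `createOrReplaceTempView`, but the idea is the same: once the data has a schema, SQL and RDD-style operations can be mixed freely on the same dataset.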