There are a few cases where case classes do not work; one is that a case class cannot take more than 22 fields (a Scala 2.10 limitation). Another is when you do not know the schema beforehand. In this approach, the data is loaded as an RDD of Row objects. The schema is created separately using the StructType and StructField objects, which represent a table and a field, respectively. The schema is then applied to the Row RDD to create a DataFrame.
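Since the schema here is an ordinary runtime object rather than a compiled class, it helps to see its shape up front. The following is a minimal sketch, assuming the Spark 1.x API used in this recipe; the field names are illustrative:

import org.apache.spark.sql.types._

// A schema is a StructType: an ordered collection of StructFields,
// each carrying a field name, a datatype, and a nullable flag.
val schema = StructType(Array(
  StructField("first_name", StringType, true),
  StructField("last_name", StringType, true),
  StructField("age", IntegerType, true)))

Because the schema is built as data at runtime, it can just as easily come from a configuration file or a metadata store, which is what makes this approach work when the schema is not known at compile time.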
Start the Spark shell and give it some extra memory:
$ spark-shell --driver-memory 1G
Import the implicit conversions:
scala> import sqlContext.implicits._
Import the Spark SQL datatypes and Row objects:
scala> import org.apache.spark.sql._
scala> import org.apache.spark.sql.types._
In another shell, create some sample data to be put in HDFS:
$ mkdir person
$ echo "Barack,Obama,53" >> person/person.txt
$ echo "George,Bush,68" >> person/person.txt
$ echo "Bill,Clinton,68" >> person/person.txt
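From here, the recipe proceeds along the lines described at the start of this section. As a hedged sketch of the continuation, assuming the person directory is uploaded to HDFS (for example, with hdfs dfs -put) and using illustrative variable names:

$ hdfs dfs -put person person

Back in the Spark shell, build an RDD of Row objects from the raw text, define the schema, and apply it:

scala> // Parse each CSV line into a Row of (first name, last name, age)
scala> val personRDD = sc.textFile("person")
scala> val rowRDD = personRDD.map(_.split(",")).map(p => Row(p(0), p(1), p(2).trim.toInt))
scala> // Define the schema matching the Row layout, then pair the two
scala> val schema = StructType(Array(StructField("first_name", StringType, true), StructField("last_name", StringType, true), StructField("age", IntegerType, true)))
scala> val personDF = sqlContext.createDataFrame(rowRDD, schema)
scala> personDF.show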