There are a few cases where case classes might not work; one of these is that case classes cannot take more than 22 fields. Another is when the schema is not known beforehand. In this approach, the data is loaded as an RDD of Row objects. The schema is created separately using the StructType and StructField objects, which represent a table and a field, respectively. The schema is then applied to the Row RDD to create a DataFrame.
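This also shows why the approach helps when the schema is only known at runtime. As a minimal sketch (the header value and the string-only typing are illustrative assumptions, not part of the recipe), column names discovered at runtime, such as from a CSV header line, can be turned into a schema on the fly, which a compile-time case class cannot do:
import org.apache.spark.sql.types.{StructType, StructField, StringType}
// Illustrative: column names discovered only at runtime (e.g., a CSV header)
val header = Seq("first_name", "last_name")
// Build one StructField per column; everything typed as a nullable string here
val dynamicSchema = StructType(header.map(name => StructField(name, StringType, true)))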
- Start the Spark shell or Databricks Cloud Scala notebook:
$ spark-shell
- Import the Spark SQL datatypes and the Row object:
scala> import org.apache.spark.sql._
scala> import org.apache.spark.sql.types._
- Create the schema using the StructType and StructField objects. The StructField object takes parameters in the form of the field name, field data type, and nullability:
scala> val schema = StructType(Array(StructField("first_name",StringType,true), StructField("last_name",StringType,true)))
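To round off the flow described at the start, the following sketch applies the schema to an RDD of Row objects; the sample rows and the sqlContext handle (predefined in the Spark 1.x shell; in Spark 2.x the spark SparkSession provides the same createDataFrame method) are assumptions for illustration:
scala> val personRDD = sc.parallelize(List(Row("Barack","Obama"), Row("George","Bush")))
scala> val personDF = sqlContext.createDataFrame(personRDD, schema)
scala> personDF.show
The show call prints the two rows under the first_name and last_name headers, confirming that the schema was applied.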