Programmatically specifying the schema


There are a few cases where case classes might not work; one is that a case class cannot take more than 22 fields. Another is that you do not know the schema beforehand. In this approach, the data is loaded as an RDD of Row objects. The schema is created separately using the StructType and StructField objects, which represent a table and a field, respectively. The schema is then applied to the Row RDD to create a DataFrame.
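
Before walking through the recipe, here is a minimal sketch of the idea that can be pasted into the Spark shell (where sc and sqlContext are already defined); the column names and sample rows are illustrative assumptions, not data from the recipe:

    import org.apache.spark.sql._
    import org.apache.spark.sql.types._

    // A StructType is a collection of StructFields; each StructField
    // gives a column name, a data type, and a nullable flag.
    val schema = StructType(Array(
      StructField("first_name", StringType, true),
      StructField("last_name", StringType, true),
      StructField("age", IntegerType, true)))

    // Build an RDD of generic Row objects; no case class is needed.
    val rowRDD = sc.parallelize(Seq(
      Row("Jane", "Doe", 30),
      Row("John", "Smith", 40)))

    // Apply the schema to the Row RDD to create a DataFrame.
    val df = sqlContext.createDataFrame(rowRDD, schema)
    df.printSchema()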

How to do it...

  1. Start the Spark shell and give it some extra memory:

    $ spark-shell --driver-memory 1G
    
  2. Import the implicit conversions:

    scala> import sqlContext.implicits._
    
  3. Import the Spark SQL data types and the Row object:

    scala> import org.apache.spark.sql._
    scala> import org.apache.spark.sql.types._
    
  4. In another shell, create some sample data to be put in HDFS:

    $ mkdir person
    $ echo "Barack,Obama,53" >> person/person.txt
    $ echo "George,Bush,68" >> person/person.txt
    $ echo "Bill,Clinton,68" &gt...