Though the local filesystem is not a good fit for storing big data, owing to disk size limits and its lack of a distributed nature, you can technically load data into distributed systems from the local filesystem. The file or directory you access must then be available on every node.
Please note that this feature is not a good way to load side data. For side data, Spark provides broadcast variables, which will be discussed in upcoming chapters.
In this recipe, we will look at how to load data into Spark from the local filesystem.
Let's start with the example of Shakespeare's "to be or not to be":
1. Create the words directory by using the following command:

   $ mkdir words

2. Get into the words directory:

   $ cd words

3. Create the sh.txt text file and enter "to be or not to be" in it:

   $ echo "to be or not to be" > sh.txt

4. Start the Spark shell:

   $ spark-shell

5. Load the words directory as an RDD:

   scala> val words...
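The preparation steps above can be collapsed into a short, re-runnable script. The final spark-shell line appears only as a comment because it requires a Spark installation; the file:///path/to/words URI is a placeholder, and sc.textFile is the standard SparkContext method for reading text files:

```shell
# Recreate the sample data from the steps above (safe to re-run)
mkdir -p words
echo "to be or not to be" > words/sh.txt
cat words/sh.txt

# With Spark installed, the directory can then be loaded in spark-shell, e.g.:
#   scala> val words = sc.textFile("file:///path/to/words")
```

Note that sc.textFile accepts a directory as well as a single file; when given a directory, it reads every file inside it, which is why the whole words directory can be loaded as one RDD.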