Tuning the level of parallelism is important for fully utilizing cluster capacity. When Spark reads a file from HDFS, it creates one partition per input split, and an input split usually corresponds to one HDFS block. The default HDFS block size is 128 MB, and that works well for Spark too. For example, a 1 GB file stored in HDFS is read as eight partitions by default.
In this recipe, we will cover different ways to optimize the number of partitions.
To specify the number of partitions when loading a file into an RDD, follow these steps:
- Start the Spark shell:
$ spark-shell
- Load the RDD, passing the desired number of partitions as the second parameter (this value is a minimum, so Spark may create more partitions for a large file):
scala> sc.textFile("hdfs://localhost:9000/user/hduser/words", 10)
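To confirm that the setting took effect, check the partition count on the resulting RDD. This sketch assumes Spark 1.6 or later, where RDDs have a getNumPartitions method; on older versions, use partitions.size instead. For a small file, the count matches the requested minimum of 10; larger files may be split further:
scala> val words = sc.textFile("hdfs://localhost:9000/user/hduser/words", 10)
scala> words.getNumPartitions
res0: Int = 10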
Another approach is to change the default parallelism for the entire shell session:
- Start the Spark shell with the new value of default parallelism:
$ spark-shell --conf spark.default.parallelism=10
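Note that spark.default.parallelism controls the partition count of RDDs created with sc.parallelize and of shuffle operations such as reduceByKey; sc.textFile still derives its partition count from input splits unless you pass the second parameter explicitly. A quick way to verify the new value from the shell (again assuming Spark 1.6+ for getNumPartitions, with the outputs shown for the setting above):
scala> sc.defaultParallelism
res0: Int = 10
scala> sc.parallelize(1 to 100).getNumPartitions
res1: Int = 10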