Spark Cookbook

By Rishi Yadav

Optimizing the level of parallelism


Optimizing the level of parallelism is very important to fully utilize the cluster capacity. In the case of HDFS, the default number of partitions equals the number of InputSplits, which in most cases is the same as the number of blocks.
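
You can check how many partitions a file gets by loading it and inspecting the RDD's partitions array; a minimal sketch, using the same words file as the steps below:

    scala> // Default partitioning: one partition per InputSplit
    scala> val words = sc.textFile("hdfs://localhost:9000/user/hduser/words")
    scala> words.partitions.length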

In this recipe, we will cover different ways to optimize the number of partitions.

How to do it…

Specify the number of partitions when loading a file into an RDD by performing the following steps:

  1. Start the Spark shell:

    $ spark-shell
    
  2. Load the RDD, specifying a custom number of partitions as the second parameter:

    scala> sc.textFile("hdfs://localhost:9000/user/hduser/words", 10)
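
To confirm the partition count, you can inspect the RDD's partitions array:

    scala> val words = sc.textFile("hdfs://localhost:9000/user/hduser/words", 10)
    scala> words.partitions.length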
    

Another approach is to change the default parallelism by performing the following steps:

  1. Start the Spark shell with the new value of default parallelism:

    $ spark-shell --conf spark.default.parallelism=10
    
  2. Check the default value of parallelism:

    scala> sc.defaultParallelism
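
With the shell started as above, this should return 10; the REPL prints something like res0: Int = 10.

The same property can also be set programmatically when a standalone application creates its own context; a minimal sketch, with an assumed application name:

    // Sketch: set default parallelism via SparkConf in a standalone app
    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("ParallelismExample")        // assumed name, for illustration
      .set("spark.default.parallelism", "10")  // same property as --conf above
    val sc = new SparkContext(conf)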
    

Note

You can also reduce the number of partitions using an RDD method called coalesce(numPartitions), which avoids a full shuffle when shrinking the partition count.
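
A minimal sketch of coalesce, reusing the words file from the steps above:

    scala> // Shrink from 10 partitions to 2; coalesce avoids a full shuffle
    scala> val words = sc.textFile("hdfs://localhost:9000/user/hduser/words", 10)
    scala> words.coalesce(2).partitions.length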