In this recipe, we will explore Random Forest in Spark. We will use the Random Forest technique to solve a discrete classification problem. The Random Forest implementation is very fast due to Spark's exploitation of parallelism (growing many trees at once). We also do not need to worry too much about the hyper-parameters; technically, we can get away with just setting the number of trees.
- Start a new project in IntelliJ or in an IDE of your choice. Make sure the necessary JAR files are included.
- Set up the package location where the program will reside:
package spark.ml.cookbook.chapter10
- Import the necessary packages for the Spark context to get access to the cluster, and `Log4j.Logger` to reduce the amount of output produced by Spark:
import org.apache.spark.mllib.evaluation.MulticlassMetrics
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache...
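To make the flow concrete before walking through the full recipe, here is a minimal, self-contained sketch of the same pattern: build labeled data, train a Random Forest classifier with the MLlib RDD API, and evaluate it with `MulticlassMetrics`. The object name `RandomForestSketch`, the toy data, and the specific hyper-parameter values are illustrative assumptions, not the recipe's actual dataset or settings:

```scala
import org.apache.log4j.{Level, Logger}
import org.apache.spark.mllib.evaluation.MulticlassMetrics
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.tree.RandomForest
import org.apache.spark.{SparkConf, SparkContext}

object RandomForestSketch {  // hypothetical name, for illustration only
  def main(args: Array[String]): Unit = {
    // Reduce the amount of output produced by Spark
    Logger.getLogger("org").setLevel(Level.ERROR)

    val sc = new SparkContext(
      new SparkConf().setAppName("RandomForestSketch").setMaster("local[*]"))

    // Toy two-class data: each LabeledPoint pairs a label with a feature vector
    val data = sc.parallelize(Seq(
      LabeledPoint(0.0, Vectors.dense(0.1)),
      LabeledPoint(0.0, Vectors.dense(0.2)),
      LabeledPoint(1.0, Vectors.dense(0.8)),
      LabeledPoint(1.0, Vectors.dense(0.9))
    ))

    // Train a forest; often only numTrees needs real tuning,
    // the remaining hyper-parameters can stay at sensible defaults
    val model = RandomForest.trainClassifier(
      data,
      numClasses = 2,
      categoricalFeaturesInfo = Map[Int, Int](),
      numTrees = 10,
      featureSubsetStrategy = "auto",
      impurity = "gini",
      maxDepth = 4,
      maxBins = 32)

    // Score the training data and summarize with MulticlassMetrics
    val predictionAndLabel = data.map(p => (model.predict(p.features), p.label))
    val metrics = new MulticlassMetrics(predictionAndLabel)
    println(s"Accuracy = ${metrics.accuracy}")

    sc.stop()
  }
}
```

Because each tree is grown independently, Spark can train them across the cluster in parallel, which is what makes this implementation fast.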