In this recipe, we'll see how to apply linear regression.
To step through this recipe, you will need a running Spark cluster in any one of its deployment modes: local, standalone, YARN, or Mesos. To install Spark as a standalone cluster, refer to http://spark.apache.org/docs/latest/spark-standalone.html. Also, add the Spark MLlib package to the build.sbt file so that the related libraries are downloaded and the API can be used. Scala and Java must be installed; Hadoop is optional.
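A minimal build.sbt along these lines pulls in the required Spark libraries. The project name and the Scala/Spark version numbers below are illustrative placeholders; match them to the versions running on your cluster:

```scala
// build.sbt -- minimal sketch; versions shown are examples, not requirements
name := "linear-regression-recipe"

scalaVersion := "2.11.8"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"  % "2.2.0",
  "org.apache.spark" %% "spark-sql"   % "2.2.0",
  "org.apache.spark" %% "spark-mllib" % "2.2.0"
)
```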
The final step is to split the data into training and test sets and fit the regression model to the training data:
//Split the data frame into train and test sets
val train_Df = final_Df.filter(final_Df("Item_Outlet_Sales").isNotNull)
val test_Df = final_Df.filter(final_Df("Item_Outlet_Sales").isNull)
val train_Rdd = train_Df.rdd.map { row =>
  val item_weight...
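The mapping and training steps above can be sketched end to end as follows. This is a hedged illustration, not the book's exact code: it assumes `final_Df` (built in earlier steps) contains numeric feature columns, for which the names `Item_Weight` and `Item_MRP` are stand-ins, plus the `Item_Outlet_Sales` label. It uses MLlib's RDD-based `LabeledPoint` and `LinearRegressionWithSGD` API, consistent with the RDD mapping begun above:

```scala
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.{LabeledPoint, LinearRegressionWithSGD}

// Rows with a known label form the training set; unlabeled rows are the test set
val train_Df = final_Df.filter(final_Df("Item_Outlet_Sales").isNotNull)
val test_Df  = final_Df.filter(final_Df("Item_Outlet_Sales").isNull)

// Convert each training row into a LabeledPoint(label, feature vector).
// The feature column names here are illustrative assumptions.
val train_Rdd = train_Df.rdd.map { row =>
  val item_weight = row.getAs[Double]("Item_Weight")
  val item_mrp    = row.getAs[Double]("Item_MRP")
  val label       = row.getAs[Double]("Item_Outlet_Sales")
  LabeledPoint(label, Vectors.dense(item_weight, item_mrp))
}.cache()

// Fit a linear model with stochastic gradient descent;
// numIterations and stepSize are tunable hyperparameters
val model = LinearRegressionWithSGD.train(train_Rdd, 100, 0.0001)

// Inspect the fit by comparing labels against predictions on the training data
val labelsAndPreds = train_Rdd.map { lp =>
  (lp.label, model.predict(lp.features))
}
```

Caching the training RDD matters here because SGD makes many passes over the data; without `cache()` each iteration would recompute the DataFrame-to-RDD mapping.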