In this recipe, we'll see how to apply logistic regression.
To step through this recipe, you will need a running Spark cluster in any one of the modes, that is, local, standalone, YARN, or Mesos. For installing Spark on a standalone cluster, please refer to http://spark.apache.org/docs/latest/spark-standalone.html. Also, include the Spark MLlib package in the build.sbt
file so that the related libraries are downloaded and the API can be used, as shown in the sketch after this paragraph. Install Scala and Java, and optionally Hadoop.
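As a rough guide, the build.sbt entries might look like the following. This is only a sketch: the project name and the version numbers (Scala 2.11, Spark 2.2.0) are assumptions and should be matched to the Spark distribution running on your cluster.

// Hypothetical build.sbt entries; adjust the versions to match your cluster
name := "logistic-regression-recipe"
scalaVersion := "2.11.8"
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"  % "2.2.0",
  "org.apache.spark" %% "spark-sql"   % "2.2.0",
  "org.apache.spark" %% "spark-mllib" % "2.2.0"
)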
The final step is splitting the DataFrame/RDD into train and test sets and applying logistic regression to the training set:
val final_Rdd = indexedDf.rdd.map { row =>
  val age = row.getAs[Double]("age")
  val duration = row.getAs[Double]("duration")
  val previous = row.getAs[Double]("previous")
  val empvarrate = row.getAs[Double]("empvarrate")
  val job_0...
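The listing above is truncated, so the split and the model fit are not shown. The following is a minimal sketch of how that final step could look, assuming final_Rdd ends up as an RDD[LabeledPoint] (label plus feature vector); the 70/30 split ratio and the seed are arbitrary choices, not values from the original recipe.

import org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS
import org.apache.spark.mllib.evaluation.BinaryClassificationMetrics

// Split into training (70%) and test (30%) sets; the seed keeps the split reproducible
val Array(trainRdd, testRdd) = final_Rdd.randomSplit(Array(0.7, 0.3), seed = 11L)
trainRdd.cache()

// Train a binary logistic regression model with the L-BFGS optimizer
val model = new LogisticRegressionWithLBFGS()
  .setNumClasses(2)
  .run(trainRdd)

// Score the test set and report the area under the ROC curve
val predictionAndLabels = testRdd.map { point =>
  (model.predict(point.features), point.label)
}
val metrics = new BinaryClassificationMetrics(predictionAndLabels)
println(s"Area under ROC = ${metrics.areaUnderROC()}")

LogisticRegressionWithLBFGS is the RDD-based MLlib estimator; if the recipe instead works with DataFrames throughout, the spark.ml LogisticRegression API would be the equivalent choice.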