In this recipe, we'll see how to identify the variables required for the analysis and understand what each one describes.
To step through this recipe, you will need a running Spark cluster in any one of the modes, that is, local, standalone, YARN, or Mesos. For installing Spark on a standalone cluster, please refer to http://spark.apache.org/docs/latest/spark-standalone.html. Also, include the Spark MLlib package in the build.sbt file so that the related libraries are downloaded and the API can be used. Install Scala and Java, and optionally Hadoop.
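A build.sbt along these lines pulls in the MLlib API; the Scala and Spark version numbers shown here are placeholders, so match them to your cluster:

```scala
// build.sbt -- sketch of the dependency setup; versions are assumptions,
// align scalaVersion and the Spark version with your installed cluster.
name := "network-intrusion-clustering"

scalaVersion := "2.11.8"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"  % "2.1.0" % "provided",
  "org.apache.spark" %% "spark-mllib" % "2.1.0" % "provided"
)
```

The `provided` scope keeps the Spark jars out of your assembly, since the cluster supplies them at runtime.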
Let's take an example of network data which contains information on network access and a variety of network intrusions. This is the NSL-KDD dataset, which is a refined version of the KDD'99 dataset. Although the dataset has intrusions represented as labels, we use the k-means clustering algorithm, an unsupervised learning approach, to cluster the dataset into normal traffic and the four major attack categories.
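To make the clustering idea concrete before moving to MLlib, here is a toy k-means in plain Scala over 2-D points. The object and helper names are illustrative, not part of any Spark API; MLlib's `KMeans` applies the same assign-then-recompute loop at scale across an RDD:

```scala
// Toy k-means: repeatedly assign each point to its nearest center,
// then move each center to the mean of its assigned points.
object ToyKMeans {
  type Point = (Double, Double)

  // Squared Euclidean distance (square root not needed for comparisons)
  def dist2(a: Point, b: Point): Double = {
    val dx = a._1 - b._1; val dy = a._2 - b._2
    dx * dx + dy * dy
  }

  // Index of the center closest to point p
  def closest(p: Point, centers: Seq[Point]): Int =
    centers.indices.minBy(i => dist2(p, centers(i)))

  // Mean of a non-empty group of points
  def mean(ps: Seq[Point]): Point =
    (ps.map(_._1).sum / ps.size, ps.map(_._2).sum / ps.size)

  def kmeans(points: Seq[Point], k: Int, iters: Int): Seq[Point] = {
    var centers: Seq[Point] = points.take(k) // naive initialization
    for (_ <- 0 until iters) {
      val groups = points.groupBy(p => closest(p, centers))
      // Keep a center unchanged if no points were assigned to it
      centers = centers.indices.map(i => groups.get(i).map(mean).getOrElse(centers(i)))
    }
    centers
  }

  def main(args: Array[String]): Unit = {
    val pts = Seq((0.0, 0.0), (0.1, 0.2), (9.0, 9.0), (9.2, 8.8))
    val cs = kmeans(pts, k = 2, iters = 10)
    println(cs.map(c => f"(${c._1}%.2f, ${c._2}%.2f)").mkString(" "))
  }
}
```

On the NSL-KDD data the points are feature vectors derived from each connection record rather than 2-D pairs, and k would be chosen to separate normal traffic from the attack categories.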