In this recipe, we'll see how to apply online k-means to streaming data.
To work through this recipe, you will need a running Spark cluster in any one of its modes: local, standalone, YARN, or Mesos. For installing Spark as a standalone cluster, please refer to http://spark.apache.org/docs/latest/spark-standalone.html. Also include the Spark MLlib package in the build.sbt
file so that the related libraries are downloaded and the API can be used. Install Scala and Java, and optionally Hadoop. This recipe also requires Kafka and ZooKeeper running on the cluster, since we are going to run the algorithm on real data.
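The dependency setup above can be sketched in build.sbt as follows. This is a minimal sketch, not the recipe's exact file; the Spark and Scala version numbers are assumptions and should be matched to your installed versions:

```scala
// build.sbt -- minimal sketch; pin versions to your own Spark/Scala install
scalaVersion := "2.11.12"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"                % "2.2.0",
  "org.apache.spark" %% "spark-streaming"           % "2.2.0",
  "org.apache.spark" %% "spark-mllib"               % "2.2.0",  // StreamingKMeans lives here
  "org.apache.spark" %% "spark-streaming-kafka-0-8" % "2.2.0"   // Kafka receiver for DStreams
)
```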
Let's try to build a real-time network detection system using Spark Streaming and MLlib. The following code receives the real-time data, performs pre-processing, and applies the k-means algorithm to the live stream:
import java.net.InetAddress
import _root_.kafka.serializer.StringDecoder
import kafka.server...
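Since the listing above is truncated, here is a self-contained sketch of the core technique: MLlib's StreamingKMeans, which updates the cluster centers incrementally on every micro-batch. For simplicity this sketch reads comma-separated feature vectors from a socket rather than Kafka; the k value, feature dimension, and port are illustrative assumptions, and a real run needs a Spark installation plus a data source on that port:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.mllib.clustering.StreamingKMeans
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.streaming.{Seconds, StreamingContext}

object OnlineKMeansSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[2]").setAppName("OnlineKMeans")
    val ssc  = new StreamingContext(conf, Seconds(5))  // 5-second micro-batches

    // Pre-processing: parse each incoming line "1.0,2.0,3.0" into a dense vector.
    // The full recipe consumes these records from a Kafka topic instead.
    val points = ssc.socketTextStream("localhost", 9999)
      .map(line => Vectors.dense(line.split(',').map(_.toDouble)))

    // Online k-means: centers are re-estimated as each batch arrives.
    val model = new StreamingKMeans()
      .setK(4)                   // number of clusters (assumed)
      .setDecayFactor(1.0)       // 1.0 = weight all past batches equally
      .setRandomCenters(3, 0.0)  // 3 = feature dimension (assumed), weight 0.0

    model.trainOn(points)              // update centers on each batch
    model.predictOn(points).print()    // emit the cluster id of each point

    ssc.start()
    ssc.awaitTermination()
  }
}
```

Setting a decay factor below 1.0 makes the model forget older batches exponentially, which is useful when the underlying clusters drift over time.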