We already know that Spark can be used for processing large amounts of data. Spark Streaming is an extension of the Spark API that enables the processing of live data streams. It supports a wide variety of input data sources, including Twitter, HDFS, Kafka, Flume, Akka Actors, TCP sockets, and ZeroMQ. Spark Streaming breaks up the input data stream into small batches, and this discretized stream is then processed by the Spark program. The processed batches can be routed for further processing or stored in HDFS, databases, and so on.
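To make the micro-batching idea concrete, here is a small conceptual sketch in plain Python (not the Spark API): a continuous stream is split into fixed-size batches, and each batch is handed to a processing step independently. The `discretize` and `process` names are illustrative, not part of Spark.

```python
def discretize(stream, batch_size):
    """Break an input stream into small batches (micro-batching),
    mimicking how Spark Streaming discretizes a live stream."""
    batch = []
    for record in stream:
        batch.append(record)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:          # emit the final, possibly smaller batch
        yield batch

def process(batch):
    """Stand-in for the Spark job applied to one micro-batch."""
    return sum(batch)

incoming = range(1, 11)   # pretend these records arrive over a socket
results = [process(b) for b in discretize(incoming, batch_size=3)]
print(results)            # [6, 15, 24, 10]
```

Each batch result can then be forwarded downstream or persisted, just as the processed batches in Spark Streaming can be stored in HDFS or a database.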
The basic abstraction in Spark Streaming is the DStream, or discretized stream (http://www.cs.berkeley.edu/~matei/papers/2012/hotcloud_spark_streaming.pdf). Internally, a DStream is represented as a sequence of RDDs, and operations on a DStream translate into operations on the RDDs it contains. DStreams therefore inherit the benefits of RDDs, such as persistence, checkpointing, and so on. The following figure shows how Spark enables stream processing.
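The DStream-to-RDD mapping can be sketched as follows in plain Python (again a conceptual model, not the Spark API): each element of the outer list stands in for one RDD, and a single DStream-level transformation, such as a map, is applied to every RDD in the sequence. The `dstream_map` helper is a hypothetical name used only for illustration.

```python
def dstream_map(dstream, fn):
    """Apply one DStream-level transformation by applying it to
    every RDD (micro-batch) in the underlying sequence."""
    return [[fn(record) for record in rdd] for rdd in dstream]

# Three micro-batches standing in for the RDDs inside a DStream
dstream = [[1, 2], [3, 4], [5]]
doubled = dstream_map(dstream, lambda x: x * 2)
print(doubled)   # [[2, 4], [6, 8], [10]]
```

This is why DStreams automatically pick up RDD features such as persistence and checkpointing: every DStream operation ultimately runs as ordinary RDD operations.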