In the Spark Streaming data processing application discussed in the previous section, assume that there is a need to count the number of log event messages containing the keyword ERROR across the previous three batches. In other words, the application must count such event messages over a window spanning three batches. At any given point in time, the window slides forward as each new batch of data becomes available. Three important terms are involved here, and Figure 7 explains them. They are:
Batch interval: The time interval at which each DStream is produced
Window length: The duration of the window, expressed as a multiple of the batch interval; all the DStreams produced within this duration are included in the window
Sliding interval: The interval at which the window operation, such as counting the event messages, is performed
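To make these three terms concrete, the sketch below simulates the semantics in plain Python: a window length of three batches and a sliding interval of one batch, counting ERROR messages per window. The batch data and function name are illustrative assumptions; in Spark Streaming itself the same count would be expressed with a window operation such as countByWindow.

```python
from collections import deque

def sliding_error_counts(batches, window_length=3):
    """For each new batch, yield the number of ERROR messages seen in
    the window covering the most recent `window_length` batches."""
    # deque with maxlen drops the oldest batch count automatically,
    # which models the window sliding forward by one batch interval
    window = deque(maxlen=window_length)
    for batch in batches:
        # Count ERROR events in this batch and add the count to the window
        window.append(sum(1 for line in batch if "ERROR" in line))
        yield sum(window)

# Hypothetical log batches, one list per batch interval
batches = [
    ["INFO start", "ERROR disk full"],
    ["ERROR timeout", "ERROR retry", "INFO ok"],
    ["INFO heartbeat"],
    ["ERROR crash"],
]
print(list(sliding_error_counts(batches)))  # -> [1, 3, 3, 3]
```

The per-batch ERROR counts are 1, 2, 0, and 1; once the window is full, each result covers exactly the last three batches, so the fourth window (batches 2, 3, and 4) again yields 3.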
In Figure 7, at a given point in time, the DStreams used for the operation to be performed are enclosed...