The jobs we have been running so far are submitted to a predefined host for execution at fixed times; this is so-called static scheduling. There is nothing wrong with that approach. In fact, it is how batch processing originally worked, going right back to the days of built-in OS scheduling tools such as cron, and it remains perfectly adequate when there is only a handful of non-critical jobs running on fixed dates at fixed times.

However, once we move into event triggering, the batch workload no longer stays constant; it becomes subject to external events that can arrive at any time, and peak periods are hard to predict. Too many events generated within the same period can overload the job execution machine and delay the delivery of processing results. On top of that, for critical processing requests with real-time business expectations, such a delay or outage may simply not be affordable.
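As a concrete illustration of static scheduling, a cron entry like the one below runs a batch job on a fixed schedule regardless of actual workload (the script path `/opt/batch/run_eod.sh` is a hypothetical example, not from this book):

```shell
# Illustrative crontab entry: static scheduling at a fixed date and time.
# Fields: minute hour day-of-month month day-of-week command
# Runs the (hypothetical) end-of-day batch at 23:30 every weekday,
# whether or not any upstream events or data have actually arrived.
30 23 * * 1-5 /opt/batch/run_eod.sh
```

This works fine while the workload is predictable, but the job has no awareness of external events; if input arrives late or demand spikes, the fixed time slot cannot adapt.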
As the famous American investor and author Robert Kiyosaki said, "Inside every...