In this second use case, we'll continue with the same hands-on lab we used to show how KEDA can scale jobs driven by messages from a RabbitMQ queue. In this lab, you'll also see Karpenter scale down to zero nodes again. This time, however, we'll add a protection mechanism against voluntary disruptions (i.e., consolidation or drift) so that jobs can finish. See Figure 11.2 for all the components involved in this use case.

Figure 11.2 – Autoscaling a job workload with KEDA and Karpenter
As before, we'll explore each component in the diagram during the hands-on lab. Note that, in addition to the ScaledJob object from the Chapter 4 lab, we're going to reuse the same NodePool, because the protection mechanism comes from the pods rather than from a Karpenter NodePool, as sketched below.
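To make the idea concrete, here is a minimal sketch of what such a ScaledJob could look like. The image, queue name, and TriggerAuthentication name are illustrative placeholders, not the exact values from the Chapter 4 lab. The key detail for this use case is the karpenter.sh/do-not-disrupt: "true" annotation on the job's pod template, which tells Karpenter not to voluntarily disrupt a node while one of these pods is still running:

```yaml
# Sketch only: names, image, and queue details are placeholders.
apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: rabbitmq-consumer
spec:
  jobTargetRef:
    backoffLimit: 2
    template:
      metadata:
        annotations:
          # Pod-level protection: Karpenter will not voluntarily disrupt
          # (consolidate or drift) the node while this pod is running.
          karpenter.sh/do-not-disrupt: "true"
      spec:
        restartPolicy: Never
        containers:
        - name: consumer
          image: ghcr.io/example/rabbitmq-consumer:latest  # placeholder image
  pollingInterval: 10        # check the queue every 10 seconds
  maxReplicaCount: 30        # cap on jobs created in parallel
  triggers:
  - type: rabbitmq
    metadata:
      queueName: tasks       # placeholder queue name
      mode: QueueLength
      value: "1"             # roughly one job per message in the queue
    authenticationRef:
      name: rabbitmq-trigger-auth   # placeholder TriggerAuthentication
```

Because the annotation lives in the pod template rather than in the NodePool, the batch pods are protected wherever they land, while Karpenter remains free to consolidate or remove nodes once those pods have completed, which is how the cluster can still scale back down to zero.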
Although in this lab we're focusing on data processing jobs, the same pattern could apply to machine...