Introducing adaptive model training
Adaptive model training allows the number of GPUs to change while a training job is running. To illustrate what this means in practice, we'll first describe how traditional distributed DNN training works with a fixed number of GPUs.
Traditional data parallel training
As shown in the preceding figure, one data parallel training paradigm is AllReduce-based. In this setting, the number of workers is fixed at four, so each training iteration proceeds as follows (a code sketch is given after this list):
- Feed four batches of training data to the four workers, one batch per worker
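To make this concrete, here is a minimal sketch of one such iteration in PyTorch, using torch.distributed to perform the AllReduce step explicitly. The model, optimizer, loss function, and batch here are hypothetical placeholders, and the sketch assumes the process group has already been initialized with a world size of four; in practice, a wrapper such as DistributedDataParallel performs the gradient AllReduce automatically.

```python
import torch.distributed as dist

def train_step(model, optimizer, loss_fn, inputs, targets, world_size=4):
    """One AllReduce-based data parallel iteration, run on a single worker.

    Assumes dist.init_process_group(...) was already called with
    world_size=4 and that this worker's model replica and batch
    (inputs, targets) live on its own GPU.
    """
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)  # forward pass on the local batch
    loss.backward()                         # backward pass yields local gradients

    for param in model.parameters():
        if param.grad is None:
            continue
        # Sum this parameter's gradients across all four workers, then
        # divide by the world size to get the average. After this call,
        # every worker holds identical averaged gradients.
        dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
        param.grad /= world_size

    optimizer.step()                        # identical update on every replica
    return loss.item()
```

Each of the four workers runs this same step on its own batch; because the averaged gradients are identical everywhere, the four model replicas remain synchronized after every iteration.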