One of the approaches to improving the stability of the Policy Gradient (PG) family of methods is to use multiple environments in parallel. The reason behind this is the fundamental problem we discussed in Chapter 6, Deep Q-Networks, when we talked about the correlation between samples, which breaks the independent and identically distributed (i.i.d.) assumption that is critical for Stochastic Gradient Descent (SGD) optimization. The negative consequence of such correlation is very high variance in gradients: our training batch contains very similar examples, all of them pushing our network in the same direction. However, this direction may be totally wrong in the global sense, as all those examples could come from one single lucky or unlucky episode.
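To make the idea concrete, here is a minimal sketch of gathering experience from several environments in a round-robin fashion, so that consecutive samples in a batch come from different episodes. This is only an illustration, assuming the classic (pre-0.26) Gym API and using a random action as a stand-in for the policy network; it is not the full training code.

```python
import gym

# A minimal sketch, assuming the classic Gym API where reset() returns
# an observation and step() returns four values.
ENV_COUNT = 4       # how many environments to run in parallel
BATCH_SIZE = 32     # transitions per training batch

envs = [gym.make("CartPole-v1") for _ in range(ENV_COUNT)]
obs = [env.reset() for env in envs]

batch = []
while len(batch) < BATCH_SIZE:
    # Round-robin over the environments: consecutive entries in the batch
    # come from different episodes, which weakens their correlation.
    for idx, env in enumerate(envs):
        action = env.action_space.sample()  # random stand-in for the policy
        next_obs, reward, done, _ = env.step(action)
        batch.append((obs[idx], action, reward, done))
        obs[idx] = env.reset() if done else next_obs
        if len(batch) >= BATCH_SIZE:
            break

print("Collected %d transitions from %d environments" % (len(batch), ENV_COUNT))
```

In a real training loop, the random action would be replaced by sampling from the policy network, and the collected batch would feed the policy gradient update.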
With our Deep Q-Network (DQN), we solved the issue by storing a large number of previous states in the replay buffer and sampling our training batch from this buffer. If the buffer is large...