In the previous two recipes, we developed two FA-based learning algorithms: one off-policy and one on-policy. In this recipe, we will improve the performance of off-policy Q-learning by incorporating experience replay.
Experience replay means we store the agent's experiences during an episode instead of updating the Q-function at every step. With experience replay, learning splits into two phases: gaining experience, and updating the model based on the experience obtained once an episode finishes. Specifically, the experience (also called the buffer, or memory) consists of the past state, the action taken, the reward received, and the next state for each individual step in an episode.
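As a minimal sketch of such a buffer, we can use a fixed-size deque of transition tuples; the class name `ReplayBuffer` and its methods here are illustrative, not the recipe's actual code:

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size memory of (state, action, reward, next_state, done) tuples."""
    def __init__(self, capacity):
        # Oldest experiences are evicted automatically once capacity is reached
        self.memory = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        # Store one step of experience gathered during an episode
        self.memory.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Draw a random batch of past experiences for training
        return random.sample(self.memory, batch_size)

    def __len__(self):
        return len(self.memory)
```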
In the learning phase, a certain number of data points are randomly sampled from the experience buffer and used to train the model. Experience replay can stabilize training by reducing the correlation between consecutive samples, which in turn improves learning efficiency.
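A sketch of this learning phase might look as follows. Here, `estimator` is a hypothetical FA model exposing `predict(state)`, returning a list of Q-values, and `update(state, target_q_values)`, performing one gradient step; the recipe's actual model API may differ:

```python
def replay(estimator, buffer, batch_size, gamma):
    """Sample stored experience at random and fit the Q-function to it."""
    if len(buffer) < batch_size:
        return  # wait until enough experience has been gathered
    for state, action, reward, next_state, done in buffer.sample(batch_size):
        q_values = estimator.predict(state)
        if done:
            # Terminal step: the target is the reward alone
            q_values[action] = reward
        else:
            # Q-learning target: r + gamma * max_a' Q(s', a')
            q_values[action] = reward + gamma * max(estimator.predict(next_state))
        estimator.update(state, q_values)
```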