We have already discussed the concept of advantage in previous chapters, including the last exercise. Advantage is often thought of as a measure of how much better one action or policy is than another when applied to the same problem. The algorithm learns this advantage and, in turn, uses it to improve the rewards it can collect. This is a bit abstract, so let's see how it applies to one of our previous algorithms, DDQN. With DDQN, advantage was framed as understanding how to narrow the gap between the agent's current estimates and a known target or goal. Refer back to Chapter 7, Going Deeper with DDQN, if you need a refresher.
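Concretely, advantage is usually written as A(s, a) = Q(s, a) - V(s): the value of taking a particular action in a state, minus the baseline value of that state. The following is a minimal numerical sketch of that idea; the Q-values are made up for illustration, and using the mean over actions as the baseline is just one common choice:

```python
import numpy as np

# Hypothetical Q-values for a single state with four available actions
q_values = np.array([1.0, 2.5, 0.5, 2.0])

# Take the state's value V(s) as the mean over actions (a common baseline)
state_value = q_values.mean()          # 1.5

# Advantage: how much better each action is than the state's baseline value
advantage = q_values - state_value
print(advantage)                       # [-0.5  1.  -1.   0.5]
```

Notice that advantage can be negative: an action worse than the baseline gets a negative advantage, which tells the agent to choose it less often.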
The concept of advantage can be extended to what we refer to as actor-critic methods. With actor-critic, we define advantage by training two networks: one as an actor, that is, it makes decisions on the policy, and another network as a critic, which evaluates how good those decisions turn out to be. The difference between the return the actor actually collects and the critic's estimate of the state's value becomes the advantage used to update the policy.
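As a minimal sketch of the two roles, the following PyTorch snippet builds an actor head and a critic head over a shared body. The class name ActorCritic, the shared-body layout, and the layer sizes here are illustrative assumptions, not a definitive architecture:

```python
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    """One shared body with two heads: the actor head outputs action
    probabilities, the critic head outputs a state-value estimate."""
    def __init__(self, obs_size, n_actions, hidden=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_size, hidden), nn.ReLU())
        self.actor = nn.Linear(hidden, n_actions)  # policy head
        self.critic = nn.Linear(hidden, 1)         # value head

    def forward(self, x):
        h = self.body(x)
        probs = torch.softmax(self.actor(h), dim=-1)  # action probabilities
        value = self.critic(h)                        # V(s) estimate
        return probs, value

# Example: one fake 4-dimensional observation, two possible actions
net = ActorCritic(obs_size=4, n_actions=2)
probs, value = net(torch.randn(1, 4))
# advantage = observed_return - value.item()  -> scales the actor's update
```

Sharing the body between the two heads is a design choice: both heads see the same features of the state, and only the final layers specialize into "what to do" versus "how good this state is".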