Dueling DQN


This improvement to DQN was proposed in 2015, in the paper called Dueling Network Architectures for Deep Reinforcement Learning ([8] Wang et al., 2015). The core observation of this paper is that the Q-values Q(s, a) our network is trying to approximate can be divided into two quantities: the value of the state V(s) and the advantage of actions in this state A(s, a). We've seen the quantity V(s) before, as it was the core of the value iteration method from Chapter 5, Tabular Learning and the Bellman Equation: it equals the discounted expected reward achievable from this state. The advantage A(s, a) is supposed to bridge the gap from V(s) to Q(s, a), as, by definition, Q(s, a) = V(s) + A(s, a). In other words, the advantage A(s, a) is just the delta, saying how much extra reward some particular action from the state brings us. The advantage can be positive or negative and, in general, can have any magnitude. For example, at some tipping point, the choice of one action...
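To make the decomposition concrete, here is a minimal sketch of a dueling network in PyTorch, the framework used throughout the book. The class name DuelingDQN, the fully connected trunk, and the layer sizes are illustrative choices rather than the book's exact code; for Atari environments the feature extractor would be convolutional. The combination step subtracts the mean advantage, one of the aggregation schemes proposed in the paper, so that V(s) and A(s, a) cannot drift apart by an arbitrary constant.

```python
import torch
import torch.nn as nn


class DuelingDQN(nn.Module):
    """Dueling architecture sketch: a shared feature trunk feeding
    two heads, one estimating V(s) and one estimating A(s, a)."""

    def __init__(self, obs_size: int, n_actions: int, hidden: int = 128):
        super().__init__()
        # shared trunk producing a state representation
        self.features = nn.Sequential(
            nn.Linear(obs_size, hidden),
            nn.ReLU(),
        )
        # value head: a single scalar V(s) per state
        self.value = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
        # advantage head: one A(s, a) estimate per action
        self.advantage = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x)
        v = self.value(h)          # shape: (batch, 1)
        a = self.advantage(h)      # shape: (batch, n_actions)
        # Q(s, a) = V(s) + A(s, a) - mean_a A(s, a):
        # subtracting the mean advantage keeps the split identifiable
        return v + (a - a.mean(dim=1, keepdim=True))


# usage sketch: a batch of 8 observations of size 4, 2 possible actions
net = DuelingDQN(obs_size=4, n_actions=2)
q_values = net(torch.randn(8, 4))  # tensor of shape (8, 2) with Q-values
```

The rest of the training loop is unchanged relative to plain DQN: the network still outputs one Q-value per action, so the same loss and target-network machinery apply; only the internal head structure differs.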