#### Overview of this book

Deep Reinforcement Learning Hands-On is a comprehensive guide to the very latest DL tools and their limitations. You will evaluate methods including cross-entropy and policy gradients, before applying them to real-world environments. Take on both the Atari set of virtual games and family favorites such as Connect4. The book provides an introduction to the basics of RL, giving you the know-how to code intelligent learning agents to take on a formidable array of practical tasks. Discover how to implement Q-learning on 'grid world' environments, teach your agent to buy and trade stocks, and find out how natural language models are driving the boom in chatbots.
Deep Reinforcement Learning Hands-On
Contributors
Preface
What is Reinforcement Learning?
OpenAI Gym
Deep Learning with PyTorch
The Cross-Entropy Method
Tabular Learning and the Bellman Equation
Deep Q-Networks
DQN Extensions
The Actor-Critic Method
Chatbots Training with RL
Continuous Action Space
Trust Regions – TRPO, PPO, and ACKTR
Black-Box Optimization in RL
Beyond Model-Free – Imagination
AlphaGo Zero
Index

## Theoretical background of the cross-entropy method

This section is optional and included for readers who are interested in why the method works. If you wish, you can refer to the original paper on the cross-entropy method, which is given at the end of the section.

The basis of the cross-entropy method lies in the importance sampling theorem, which states this:

$$\mathbb{E}_{x \sim p(x)}[H(x)] = \int_x p(x) H(x)\,dx = \int_x q(x)\,\frac{p(x)}{q(x)}\,H(x)\,dx = \mathbb{E}_{x \sim q(x)}\left[\frac{p(x)}{q(x)}\,H(x)\right]$$

In our RL case, H(x) is the reward value obtained by some policy x, and p(x) is a distribution of all possible policies. We don't want to maximize our reward by searching over all possible policies; instead, we want to find a way to approximate p(x)H(x) by q(x), iteratively minimizing the distance between them. The distance between two probability distributions is calculated by the Kullback-Leibler (KL) divergence, which is as follows:

$$KL\big(p_1(x) \,\|\, p_2(x)\big) = \mathbb{E}_{x \sim p_1(x)} \log \frac{p_1(x)}{p_2(x)} = \mathbb{E}_{x \sim p_1(x)}\big[\log p_1(x)\big] - \mathbb{E}_{x \sim p_1(x)}\big[\log p_2(x)\big]$$
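The identity above can be checked numerically. The sketch below is illustrative, not from the book: it estimates the expectation of a hypothetical reward H(x) = x² under a target distribution p = N(0, 1) by sampling from a different proposal q = N(0, 2) and reweighting each sample by p(x)/q(x); the weighted average recovers the expectation under p (which is 1 for this H).

```python
import math
import random

random.seed(0)

def H(x):
    # hypothetical "reward" of a sample; E_{x~N(0,1)}[x^2] = 1
    return x * x

def normal_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

n = 200_000
# draw from the proposal q = N(0, 2), not from the target p = N(0, 1)
samples = [random.gauss(0.0, 2.0) for _ in range(n)]
# importance weights p(x)/q(x) correct for sampling from the wrong distribution
weights = [normal_pdf(x, 0.0, 1.0) / normal_pdf(x, 0.0, 2.0) for x in samples]
estimate = sum(w * H(x) for w, x in zip(weights, samples)) / n
print(estimate)  # close to 1.0, the expectation under p
```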

The first term in KL is called entropy; it doesn't depend on p2(x), so it can be omitted during the minimization. The second term, $-\mathbb{E}_{x \sim p_1(x)}[\log p_2(x)]$, is called cross-entropy and is a very common optimization objective in DL.

Combining both formulas, we get an iterative procedure: starting with $q_0(x) = p(x)$, on every step we improve the approximation of $p(x)H(x)$ by minimizing the cross-entropy, which amounts to

$$q_{i+1}(x) = \operatorname*{argmax}_{q}\; \mathbb{E}_{x \sim q_i(x)} \left[ \frac{p(x)\,H(x)}{q_i(x)} \log q(x) \right]$$
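In practice this iterative scheme reduces to a simple loop: sample from the current distribution, keep the highest-reward ("elite") samples, and refit the distribution to them. The sketch below is a toy illustration, not the book's RL implementation: it uses a one-dimensional Gaussian as q and a made-up reward that peaks at x = 3, so the fitted mean should converge there.

```python
import random
import statistics

random.seed(1)

def H(x):
    # toy reward, maximized at x = 3
    return -(x - 3.0) ** 2

mu, sigma = 0.0, 5.0                # initial sampling distribution q_0
for _ in range(20):
    xs = [random.gauss(mu, sigma) for _ in range(200)]
    elite = sorted(xs, key=H, reverse=True)[:20]    # keep top 10% by reward
    mu = statistics.mean(elite)                     # refit q to the elite samples
    sigma = max(statistics.stdev(elite), 1e-3)      # floor keeps some exploration

print(mu)  # converges near 3.0
```

The elite-selection-and-refit step is exactly the cross-entropy minimization above, specialized to a Gaussian family where the argmax has a closed form (the sample mean and standard deviation of the elites).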