Deep Reinforcement Learning Hands-On

By: Maxim Lapan

Overview of this book

Deep Reinforcement Learning Hands-On is a comprehensive guide to the very latest DL tools and their limitations. You will evaluate methods including Cross-entropy and policy gradients, before applying them to real-world environments. Take on both the Atari set of virtual games and family favorites such as Connect4. The book provides an introduction to the basics of RL, giving you the know-how to code intelligent learning agents to take on a formidable array of practical tasks. Discover how to implement Q-learning on 'grid world' environments, teach your agent to buy and trade stocks, and find out how natural language models are driving the boom in chatbots.

Results


Let’s now take a look at the results.

The feed-forward model

Convergence on one year of Yandex data requires about 10M training steps, which can take a while (a GTX 1080 Ti trains at a speed of 230-250 steps per second). During training, several charts in TensorBoard show us what's going on.

The following are two charts, reward_100 and steps_100, showing the average reward (in percent) and the average episode length, respectively, over the last 100 episodes:

Figure 3: The reward plot for the feed-forward version
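The smoothed metrics in these charts are simple sliding-window means over the last 100 episodes. A minimal sketch of such a tracker (the class name `WindowTracker` is illustrative; the book's actual code may compute this differently):

```python
from collections import deque


class WindowTracker:
    """Keeps the mean of the last `size` episode values,
    e.g. the reward_100 and steps_100 charts."""
    def __init__(self, size=100):
        self.values = deque(maxlen=size)

    def add(self, value):
        self.values.append(value)

    def mean(self):
        if not self.values:
            return 0.0
        return sum(self.values) / len(self.values)


reward_100 = WindowTracker(size=100)
for episode_reward in [0.5, -0.2, 1.3, 0.1]:
    reward_100.add(episode_reward)
print(reward_100.mean())  # mean of the four rewards: 0.425
```

At the end of every episode, the current `mean()` would be written to TensorBoard as a scalar, producing the smooth curves shown above.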

The charts show us two good things:

  1. Our agent was able to figure out when to buy and sell the share to get a positive reward. Since we pay a 0.1% commission on both the open and the close of a position, random actions yield roughly a -0.2% reward.

  2. Over the training time, the episode length grew from seven bars to 25 and is still slowly increasing, which means the agent holds the share longer and longer to increase the final profit.
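The -0.2% baseline for random actions follows directly from the commission model: a position that is opened and closed at roughly the same price pays the 0.1% commission twice. A quick sanity check (the function name and prices are illustrative):

```python
def round_trip_reward_pct(open_price, close_price, commission_pct=0.1):
    """Percentage reward of buying at open_price and selling at close_price,
    paying commission_pct on both the open and the close of the position."""
    price_change_pct = 100.0 * (close_price - open_price) / open_price
    return price_change_pct - 2 * commission_pct


# A position opened and closed at the same price loses two commissions:
print(round_trip_reward_pct(100.0, 100.0))  # -> -0.2
```

So to come out ahead, the agent must find moves larger than 0.2%, which is why holding positions longer (point 2 above) pays off.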

Unfortunately...