
Deep Reinforcement Learning Hands-On

By: Maxim Lapan

Overview of this book

Deep Reinforcement Learning Hands-On is a comprehensive guide to the very latest DL tools and their limitations. You will evaluate methods including Cross-entropy and policy gradients, before applying them to real-world environments. Take on both the Atari set of virtual games and family favorites such as Connect4. The book provides an introduction to the basics of RL, giving you the know-how to code intelligent learning agents to take on a formidable array of practical tasks. Discover how to implement Q-learning on 'grid world' environments, teach your agent to buy and trade stocks, and find out how natural language models are driving the boom in chatbots.

Adding text description


As the last example of this chapter, we'll add the text description of the problem to the observations of our model. We've already mentioned that some problems contain vital information given in a text description, like the index of the tab that needs to be clicked or the list of entries that the agent needs to check. The same information is shown at the top of the image observation, but pixels are not always the best representation of simple text.
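To make this concrete, a sketch of what such a multimodal observation might look like (the field names and the instruction string here are hypothetical, not the environment's actual API):

```python
# Hypothetical multimodal observation: the task instruction arrives both
# rendered inside the image's top strip and as a separate text string.
observation = {
    "image": [[0] * 160 for _ in range(210)],  # placeholder pixel grid
    "text": 'Switch between the tabs to find and click on the link.',
}

# A naive tokenization sketch: the text branch of the model would
# consume a sequence of tokens rather than raw pixels.
tokens = observation["text"].lower().rstrip(".").split()
```

The point is that the instruction is trivially recoverable from the string, while decoding the same words from the rendered pixels would force the network to learn OCR as a side task.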

To take this text into account, we need to extend our model's input from an image only to an image plus text data. We worked with text in the previous chapter, so a Recurrent Neural Network (RNN) is quite an obvious choice (maybe not the best one for such a toy problem, but it is flexible and scalable). We are not going to cover this example in detail, but will focus only on the most important points of the implementation (the whole code is in Chapter13/wob_click_mm_train.py). In comparison to our clicker model, the text extension doesn't add...
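The two-branch idea can be sketched in PyTorch as follows. This is a minimal illustration, not the book's wob_click_mm_train.py implementation: the class name, layer sizes, and the 64x64 input resolution are all assumptions made for the example.

```python
import torch
import torch.nn as nn


class MultimodalNet(nn.Module):
    """Sketch of an image + text model: a conv branch encodes the pixel
    observation, an LSTM branch encodes the tokenized instruction, and
    the two feature vectors are concatenated before the output head."""

    def __init__(self, vocab_size, n_actions, emb_dim=32, rnn_hidden=64):
        super().__init__()
        # Image branch: a small conv stack over 3x64x64 observations
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        # Infer the flattened conv output size with a dummy pass
        conv_out = self.conv(torch.zeros(1, 3, 64, 64)).shape[1]
        # Text branch: embedding lookup followed by an LSTM
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTM(emb_dim, rnn_hidden, batch_first=True)
        # Joint head over the concatenated features
        self.out = nn.Linear(conv_out + rnn_hidden, n_actions)

    def forward(self, image, tokens):
        img_feat = self.conv(image)
        # Use the LSTM's final hidden state as a summary of the text
        _, (h_n, _) = self.rnn(self.emb(tokens))
        txt_feat = h_n[-1]
        return self.out(torch.cat([img_feat, txt_feat], dim=1))


net = MultimodalNet(vocab_size=100, n_actions=10)
logits = net(torch.zeros(2, 3, 64, 64),
             torch.zeros(2, 7, dtype=torch.long))
# logits has shape (batch, n_actions) == (2, 10)
```

Concatenating the final RNN hidden state with the conv features is the simplest fusion strategy; the real model also has to handle batches of variable-length token sequences, typically via padding or packed sequences.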