TensorFlow 2 Reinforcement Learning Cookbook

By: Palanisamy P

Overview of this book

With deep reinforcement learning, you can build intelligent agents, products, and services that can go beyond computer vision or perception to perform actions. TensorFlow 2.x is the latest major release of the most popular deep learning framework used to develop and train deep neural networks (DNNs). This book contains easy-to-follow recipes for leveraging TensorFlow 2.x to develop artificial intelligence applications. Starting with an introduction to the fundamentals of deep reinforcement learning and TensorFlow 2.x, the book covers OpenAI Gym, model-based RL, model-free RL, and how to develop basic agents. You'll discover how to implement advanced deep reinforcement learning algorithms such as actor-critic, deep deterministic policy gradients, deep-Q networks, proximal policy optimization, and deep recurrent Q-networks for training your RL agents. As you advance, you’ll explore the applications of reinforcement learning by building cryptocurrency trading agents, stock/share trading agents, and intelligent agents for automating task completion. Finally, you'll find out how to deploy deep reinforcement learning agents to the cloud and build cross-platform apps using TensorFlow 2.x. By the end of this TensorFlow book, you'll have gained a solid understanding of deep reinforcement learning algorithms and their implementations from scratch.

Large-scale Deep RL agent training using Ray, Tune, and RLlib

In the previous recipe, we got a flavor of how to implement distributed RL agent training routines from scratch. Since most of the components we used as building blocks have become standard pieces of Deep RL training infrastructure, we can leverage an existing library that maintains high-quality implementations of them. Fortunately, with our choice of Ray as the framework for distributed computing, we are in a good place. Tune and RLlib are two libraries built on top of Ray and shipped together with it; they provide highly scalable hyperparameter tuning (Tune) and RL training (RLlib). This recipe provides a curated set of steps to get you acquainted with Ray, Tune, and RLlib so that you can use them to scale up your Deep RL training routines. In addition to the recipe discussed here in the text, the cookbook’s code repository for this chapter contains a handful of additional recipes...
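
To make this concrete, the following is a minimal sketch of how Tune and RLlib can be combined in a single script. It assumes a Ray 1.x-style tune.run API and a Gym-registered environment; the configuration keys, the CartPole environment ID, and the hyperparameter values are illustrative and may differ across Ray versions.

import ray
from ray import tune

ray.init()  # start (or connect to) a local Ray cluster

tune.run(
    "PPO",                                     # RLlib's built-in PPO trainer
    stop={"episode_reward_mean": 195},         # stop once CartPole is (roughly) solved
    config={
        "env": "CartPole-v0",                  # any Gym-registered environment ID (illustrative)
        "framework": "tf2",                    # use TensorFlow 2.x for the policy network
        "num_workers": 2,                      # parallel rollout workers for experience collection
        "lr": tune.grid_search([1e-3, 1e-4]),  # Tune sweeps this hyperparameter across trials
    },
)

Each value passed to tune.grid_search spawns a separate trial, and within each trial RLlib parallelizes experience collection across the configured rollout workers, which is what lets the same script scale from a laptop to a cluster by changing only the configuration.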