TensorFlow 2 Reinforcement Learning Cookbook

By: Palanisamy P

Overview of this book

With deep reinforcement learning, you can build intelligent agents, products, and services that go beyond computer vision or perception to perform actions. TensorFlow 2.x is the latest major release of the most popular deep learning framework used to develop and train deep neural networks (DNNs). This book contains easy-to-follow recipes for leveraging TensorFlow 2.x to develop artificial intelligence applications. Starting with an introduction to the fundamentals of deep reinforcement learning and TensorFlow 2.x, the book covers OpenAI Gym, model-based RL, model-free RL, and how to develop basic agents. You'll discover how to implement advanced deep reinforcement learning algorithms such as actor-critic, deep deterministic policy gradients, deep Q-networks, proximal policy optimization, and deep recurrent Q-networks for training your RL agents. As you advance, you'll explore the applications of reinforcement learning by building cryptocurrency trading agents, stock/share trading agents, and intelligent agents for automating task completion. Finally, you'll find out how to deploy deep reinforcement learning agents to the cloud and build cross-platform apps using TensorFlow 2.x. By the end of this TensorFlow book, you'll have gained a solid understanding of deep reinforcement learning algorithms and their implementations from scratch.
Table of Contents (11 chapters)

What this book covers

Chapter 1, Developing Building Blocks for Deep Reinforcement Learning Using TensorFlow 2.x, provides recipes for getting started with RL environments, deep neural network-based RL agents, evolutionary neural agents, and other building blocks for both discrete and continuous action-space RL applications.

Chapter 2, Implementing Value-Based Policy Gradients and Actor-Critic Deep RL Algorithms, includes recipes for implementing value iteration-based learning agents and breaks down the implementation of several foundational algorithms in RL, such as Monte-Carlo control, SARSA and Q-learning, actor-critic, and policy gradient algorithms into simple steps.
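To give a flavor of the value-based methods this chapter breaks down, here is a minimal tabular Q-learning sketch on a toy 5-state chain environment. The environment, hyperparameters, and variable names are illustrative assumptions for this sketch, not code from the book.

```python
import random

random.seed(0)

# Toy chain environment: start at state 0, reach state 4 for a reward of 1.
NUM_STATES, NUM_ACTIONS = 5, 2   # actions: 0 = left, 1 = right
ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1

def step(state, action):
    """Move left/right along the chain; reward 1 for reaching the last state."""
    next_state = max(0, min(NUM_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == NUM_STATES - 1 else 0.0
    done = next_state == NUM_STATES - 1
    return next_state, reward, done

q_table = [[0.0] * NUM_ACTIONS for _ in range(NUM_STATES)]

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy exploration
        if random.random() < EPSILON:
            action = random.randrange(NUM_ACTIONS)
        else:
            action = max(range(NUM_ACTIONS), key=lambda a: q_table[state][a])
        next_state, reward, done = step(state, action)
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        td_target = reward + (0.0 if done else GAMMA * max(q_table[next_state]))
        q_table[state][action] += ALPHA * (td_target - q_table[state][action])
        state = next_state

# The learned greedy policy should prefer moving right toward the goal.
policy = [max(range(NUM_ACTIONS), key=lambda a: q_table[s][a])
          for s in range(NUM_STATES)]
```

The same loop structure carries over to SARSA by replacing the max in the TD target with the Q-value of the action actually taken next.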

Chapter 3, Implementing Advanced RL Algorithms, provides concise recipes to implement complete agent training systems using Deep Q-Network (DQN), Double and Dueling Deep Q-Network (DDQN, DDDQN), Deep Recurrent Q-Network (DRQN), Asynchronous Advantage Actor-Critic (A3C), Proximal Policy Optimization (PPO), and Deep Deterministic Policy Gradient (DDPG) RL algorithms.
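One building block shared by the DQN-family agents in this chapter is an experience replay buffer; the sketch below shows the idea in plain Python. The class and method names are illustrative assumptions, not the book's code.

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal experience replay buffer sketch for DQN-style agents."""

    def __init__(self, capacity):
        # deque with maxlen evicts the oldest transitions automatically
        self.buffer = deque(maxlen=capacity)

    def store(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform random sampling breaks the temporal correlation between
        # consecutive transitions, which stabilizes DQN training.
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = zip(*batch)
        return states, actions, rewards, next_states, dones

    def __len__(self):
        return len(self.buffer)

# Usage: store transitions while acting, sample minibatches for training.
buffer = ReplayBuffer(capacity=3)
for i in range(5):
    buffer.store(i, 0, float(i), i + 1, False)
# Only the 3 most recent transitions remain; older ones were evicted.
```

Extensions such as prioritized replay change only the sampling step, weighting transitions by TD error instead of sampling uniformly.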

Chapter 4, RL in the Real World – Building Cryptocurrency Trading Agents, shows how to implement and train a soft actor-critic agent in custom RL environments for bitcoin and ether trading using real market data from exchanges such as Gemini; the environments feature both tabular and visual (image) observations and both discrete and continuous action spaces.

Chapter 5, RL in the Real World – Building Stock/Share Trading Agents, covers how to train advanced RL agents to trade for profit in the stock market using visual price charts and/or tabular ticker data in custom RL environments powered by real stock exchange data.

Chapter 6, RL in the Real World – Building Intelligent Agents to Complete Your To-Dos, provides recipes to build, train, and test vision-based RL agents that complete tasks on the web, helping you automate chores such as clicking through pop-up/confirmation dialogs, logging into websites, finding and booking the cheapest flight tickets for your travel, decluttering your email inbox, and liking/sharing/retweeting posts on social media to engage with your followers.

Chapter 7, Deploying Deep RL Agents to the Cloud, contains recipes to equip you with tools and details to get ahead of the curve and build cloud-based Simulation-as-a-Service and Agent/Bot-as-a-Service programs using deep RL. Learn how to train RL agents using remote simulators running on the cloud, package runtime components of RL agents, and deploy deep RL agents to the cloud by deploying your own trading bot-as-a-service.

Chapter 8, Distributed Training for the Accelerated Development of Deep RL Agents, contains recipes to speed up deep RL agent development through the distributed training of deep neural network models by leveraging TensorFlow 2.x's capabilities. Learn how to utilize multiple CPUs and GPUs, both on a single machine and on a cluster of machines, to scale up/out your deep RL agent training, and learn how to leverage Ray, Tune, and RLlib for large-scale accelerated training.

Chapter 9, Deploying Deep RL Agents on Multiple Platforms, provides customizable templates that you can utilize for building and deploying your own deep RL applications for your use cases. Learn how to export RL agent models for serving/deployment in various production-ready formats, such as TensorFlow Lite, TensorFlow.js, and ONNX, and how to leverage NVIDIA Triton or build your own solution to launch production-ready, RL-based AI services. You will also deploy an RL agent in mobile and web apps and learn how to deploy RL bots in your Node.js applications.