
Hands-On Intelligent Agents with OpenAI Gym

By: Palanisamy P

Overview of this book

Many real-world problems can be broken down into tasks that require a series of decisions to be made or actions to be taken. Solving such tasks without being explicitly programmed requires a machine that is artificially intelligent and capable of learning to adapt. This book is an easy-to-follow guide to implementing learning algorithms for machine software agents that solve discrete or continuous sequential decision-making and control tasks. Hands-On Intelligent Agents with OpenAI Gym takes you through the process of building intelligent agent algorithms using deep reinforcement learning, starting from the implementation of the building blocks for configuring, training, logging, visualizing, testing, and monitoring the agent. You will walk through the process of building intelligent agents from scratch to perform a variety of tasks. In the closing chapters, the book provides an overview of the latest learning environments and learning algorithms, along with pointers to further resources that will help you take your deep reinforcement learning skills to the next level.

Monte Carlo learning and temporal difference learning

At this point, we understand that it is very useful for an agent to learn the state value function $V^\pi(s)$, which informs the agent about the long-term value of being in state $s$, so that the agent can decide whether it is a good state to be in or not. The Monte Carlo (MC) and Temporal Difference (TD) learning methods enable an agent to learn exactly that!

The goal of MC and TD learning is to learn the value functions from the agent's experience as the agent follows its policy $\pi$.

The following table summarizes the value estimate's update equations for the MC and TD learning methods:

Learning method     | State-value function update
Monte Carlo         | $V(S_t) \leftarrow V(S_t) + \alpha \left[ G_t - V(S_t) \right]$
Temporal Difference | $V(S_t) \leftarrow V(S_t) + \alpha \left[ R_{t+1} + \gamma V(S_{t+1}) - V(S_t) \right]$
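
To make the two update rules concrete, here is a minimal Python sketch of both updates on a tabular value function. The step size ALPHA, discount factor GAMMA, and the (state, reward) episode format are illustrative assumptions, not code from this book:

```python
# Minimal sketch of the tabular MC and TD(0) state-value updates shown
# in the table above. ALPHA, GAMMA, and the episode format are
# illustrative assumptions, not code from this book.
from collections import defaultdict

ALPHA = 0.1   # step size (learning rate)
GAMMA = 0.99  # discount factor

V = defaultdict(float)  # tabular state-value estimates V[s]

def td0_update(s, r, s_next):
    """TD(0): move V(s) towards the bootstrapped target R + gamma * V(s')."""
    td_target = r + GAMMA * V[s_next]
    V[s] += ALPHA * (td_target - V[s])

def mc_update(episode):
    """Every-visit MC: move V(s) towards the actual return G_t.
    `episode` is a time-ordered list of (state, reward) pairs, where
    reward is the reward received after leaving that state."""
    G = 0.0
    for s, r in reversed(episode):
        G = r + GAMMA * G   # G_t = R_{t+1} + gamma * G_{t+1}
        V[s] += ALPHA * (G - V[s])
```

Note the structural difference: the TD(0) update can be applied after every single step, while the MC update can only run over a completed episode.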

MC learning updates the value $V(S_t)$ towards the actual return $G_t$, which is the total discounted reward from time step $t$ until the end of the episode: $G_t = R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \cdots + \gamma^{T-t-1} R_T$, where $T$ is the final time step. This means that the agent has to wait until the end of the episode before it can update the value estimates. It is important to note that we...
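
Because $G_t$ depends on every reward up to the terminal step $T$, the MC update can only run once an episode has finished. The sketch below shows this episode-end constraint inside an OpenAI Gym interaction loop; the FrozenLake-v0 environment, the uniformly random policy, and the constants are illustrative assumptions, not this book's agent implementation:

```python
# Sketch: every-visit Monte Carlo value estimation with OpenAI Gym,
# assuming the small discrete FrozenLake-v0 environment and a uniformly
# random policy for illustration (not this book's agent implementation).
import gym
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.99
env = gym.make('FrozenLake-v0')
V = defaultdict(float)

for _ in range(1000):
    s = env.reset()
    trajectory, done = [], False
    while not done:
        a = env.action_space.sample()    # random policy for illustration
        s_next, r, done, _ = env.step(a)
        trajectory.append((s, r))        # store; MC cannot update yet
        s = s_next
    # Only now, at episode end, can G_t be computed (backwards in time).
    G = 0.0
    for s, r in reversed(trajectory):
        G = r + GAMMA * G                # G_t = R_{t+1} + gamma * G_{t+1}
        V[s] += ALPHA * (G - V[s])
```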