PyTorch 1.x Reinforcement Learning Cookbook

By: Yuxi (Hayden) Liu

Overview of this book

Reinforcement learning (RL) is a branch of machine learning that has gained popularity in recent times. It allows you to train AI models that learn from their own actions and optimize their behavior. PyTorch has also emerged as the preferred tool for training RL models because of its efficiency and ease of use. With this book, you'll explore the important RL concepts and the implementation of algorithms in PyTorch 1.x. The recipes in the book, along with real-world examples, will help you master various RL techniques, such as dynamic programming, Monte Carlo simulations, temporal difference, and Q-learning. You'll also gain insights into industry-specific applications of these techniques. Later chapters will guide you through solving problems such as the multi-armed bandit problem and the cartpole problem using the multi-armed bandit algorithm and function approximation. You'll also learn how to use Deep Q-Networks to complete Atari games, along with how to effectively implement policy gradients. Finally, you'll discover how RL techniques are applied to Blackjack, Gridworld environments, internet advertising, and the Flappy Bird game. By the end of this book, you'll have developed the skills you need to implement popular RL algorithms and use RL techniques to solve real-world problems.

Developing MC control with weighted importance sampling

In the previous recipe, we simply averaged the returns collected under the behavior policy, each scaled by its importance ratio, that is, the relative probability of the trajectory under the target policy versus the behavior policy. This technique is formally called ordinary importance sampling. It is known to have high variance, so we usually prefer the weighted version of importance sampling, which we will cover in this recipe.
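
As a quick recap, the ordinary importance-sampling estimate divides the sum of weighted returns by the number of episodes. In illustrative notation (with \(W_k\) the importance ratio of the \(k\)-th episode that visits the state and \(G_k\) its return; these symbols are for this sketch, not necessarily the book's):

\[ V(s) = \frac{1}{n}\sum_{k=1}^{n} W_k G_k \]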

Weighted importance sampling differs from ordinary importance sampling in the way it averages returns. Instead of dividing the sum of weighted returns by the number of episodes, it normalizes by the sum of the importance ratios, that is, it takes a weighted average of the returns.
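
In the same notation as above, the weighted estimate is:

\[ V(s) = \frac{\sum_{k=1}^{n} W_k G_k}{\sum_{k=1}^{n} W_k} \]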

Weighted importance sampling often has a much lower variance than the ordinary version. If you experimented with ordinary importance sampling for Blackjack, you will have noticed that the results vary considerably from run to run.
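
To make the idea concrete, here is a minimal sketch of off-policy MC control with weighted importance sampling, assuming gym's Blackjack-v0 environment and the older gym API (reset returning the state, step returning a 4-tuple) of the PyTorch 1.x era; names such as mc_control_weighted and run_episode are illustrative, not the book's exact code. The key line updates Q with step size W / C, which implements the weighted average above incrementally.

import gym
import torch
from collections import defaultdict

def run_episode(env, behavior_policy):
    """Roll out one episode under the behavior policy and record the trajectory."""
    state = env.reset()
    states, actions, rewards = [], [], []
    done = False
    while not done:
        probs = behavior_policy(state)
        action = torch.multinomial(probs, 1).item()
        states.append(state)
        actions.append(action)
        state, reward, done, _ = env.step(action)
        rewards.append(reward)
    return states, actions, rewards

def mc_control_weighted(env, gamma, n_episode):
    """Off-policy MC control with weighted importance sampling and a random behavior policy."""
    n_action = env.action_space.n
    # Q holds action values; C holds the cumulative sum of importance weights per (state, action)
    Q = defaultdict(lambda: torch.zeros(n_action))
    C = defaultdict(lambda: torch.zeros(n_action))
    behavior_policy = lambda state: torch.ones(n_action) / n_action
    for _ in range(n_episode):
        states, actions, rewards = run_episode(env, behavior_policy)
        G, W = 0.0, 1.0
        # Process the episode backwards, accumulating the return G and the weight W
        for state, action, reward in zip(reversed(states), reversed(actions), reversed(rewards)):
            G = gamma * G + reward
            C[state][action] += W
            # Weighted-average update: step size W / C rather than 1 / visit_count
            Q[state][action] += (W / C[state][action]) * (G - Q[state][action])
            # The target policy is greedy with respect to Q; once the behavior action
            # deviates from it, the importance ratio of the remaining prefix is 0
            if action != torch.argmax(Q[state]).item():
                break
            W *= 1 / behavior_policy(state)[action].item()
    policy = {state: torch.argmax(q).item() for state, q in Q.items()}
    return Q, policy

env = gym.make('Blackjack-v0')
optimal_Q, optimal_policy = mc_control_weighted(env, gamma=1.0, n_episode=500000)

Because each update is normalized by the running sum of weights C, a few episodes with very large importance ratios no longer dominate the estimate, which is why this version tends to be much more stable in practice.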

...