PyTorch 1.x Reinforcement Learning Cookbook

By: Yuxi (Hayden) Liu

Overview of this book

Reinforcement learning (RL) is a branch of machine learning that has gained popularity in recent times. It allows you to train AI models that learn from their own actions and optimize their behavior. PyTorch has also emerged as the preferred tool for training RL models because of its efficiency and ease of use. With this book, you'll explore the important RL concepts and the implementation of algorithms in PyTorch 1.x. The recipes in the book, along with real-world examples, will help you master various RL techniques, such as dynamic programming, Monte Carlo simulations, temporal difference learning, and Q-learning. You'll also gain insights into industry-specific applications of these techniques. Later chapters will guide you through solving problems such as the multi-armed bandit problem with bandit algorithms and the cartpole problem with function approximation. You'll also learn how to use Deep Q-Networks to complete Atari games, along with how to implement policy gradients effectively. Finally, you'll discover how RL techniques are applied to Blackjack, Gridworld environments, internet advertising, and the Flappy Bird game. By the end of this book, you'll have developed the skills you need to implement popular RL algorithms and use RL techniques to solve real-world problems.

Setting up the continuous Mountain Car environment

So far, the environments we have worked with have discrete action values, such as 0 or 1, representing up or down, or left or right. In this recipe, we will experiment with a Mountain Car environment that takes continuous actions.
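For example (a minimal sketch, assuming the classic OpenAI Gym 0.x API used with PyTorch 1.x), a discrete environment such as CartPole exposes only two integer actions:

import gym

# A discrete environment: the action space contains exactly two
# integer actions, 0 (push left) and 1 (push right)
env = gym.make('CartPole-v0')
print(env.action_space)           # Discrete(2)
print(env.action_space.sample())  # a random action, either 0 or 1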

Continuous Mountain Car (https://github.com/openai/gym/wiki/MountainCarContinuous-v0) is a Mountain Car environment with continuous actions whose values range from -1 to 1. The goal is to get the car to the top of the hill on the right-hand side.
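As a quick sketch of the setup (again assuming the Gym 0.x API), we can create the environment and verify that its action space is a continuous Box spanning [-1, 1]:

import gym

env = gym.make('MountainCarContinuous-v0')
# The action space is a one-dimensional Box of floats in [-1.0, 1.0]
print(env.action_space)       # Box(1,)
print(env.action_space.low)   # [-1.]
print(env.action_space.high)  # [1.]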

In a one-dimensional track, the car is positioned between -1.2 (leftmost) and 0.6 (rightmost), and the goal (the yellow flag) is located at 0.45. The engine of the car is not strong enough to drive it to the top in a single pass, so it has to drive back and forth to build up momentum. Hence, the action is a float that represents the force of pushing the car: a negative value pushes it to the left, and a positive value pushes it to the right.
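To see this in action, here is a minimal sketch that runs one episode with random forces; the two-element state holds the car's position and velocity, and the episode is truncated by Gym's built-in time limit if the flag is never reached:

import gym

env = gym.make('MountainCarContinuous-v0')
state = env.reset()  # state = [position, velocity]
done = False
while not done:
    # Sample a random force in [-1, 1]; random pushes rarely build
    # enough momentum, so the episode usually ends via the time limit
    action = env.action_space.sample()
    state, reward, done, _ = env.step(action)
print('Final position: {:.4f}'.format(state[0]))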