Python Reinforcement Learning Projects

By: Sean Saito, Yang Wenzhuo, Rajalingappaa Shanmugamani

Overview of this book

Reinforcement learning is one of the most exciting and rapidly growing fields in machine learning. This is due to the many novel algorithms developed and incredible results published in recent years. In this book, you will learn about the core concepts of RL including Q-learning, policy gradients, Monte Carlo processes, and several deep reinforcement learning algorithms. As you make your way through the book, you'll work on projects with datasets of various modalities including image, text, and video. You will gain experience in several domains, including gaming, image processing, and physical simulations. You'll explore technologies such as TensorFlow and OpenAI Gym to implement deep reinforcement learning algorithms that also predict stock prices, generate natural language, and even build other neural networks. By the end of this book, you will have hands-on experience with eight reinforcement learning projects, each addressing different topics and/or algorithms. We hope these practical exercises will provide you with better intuition and insight into the field of reinforcement learning and how to apply its algorithms to various problems in real life.

Markov models


The problem is set up as a reinforcement learning problem, using a trial and error method. The environment is described using states, and the states are changed by actions. The actions are determined by an algorithm, based on the current state, in order to reach a particular target state. A model in which the next state depends only on the current state and action is termed a Markov model. In general, past states can influence future states, but here we assume that the current state encodes all of the relevant information from previous states. States come in two types: one is observable, and the other is non-observable (hidden). A model that also takes non-observable states into account is called a Hidden Markov model.
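
To make the Markov property concrete, here is a minimal sketch in Python: the next state is sampled from a distribution that depends only on the current state. The two weather states and their transition probabilities are invented for illustration and are not taken from the book.

    import numpy as np

    # A minimal, illustrative Markov model: the next state is sampled from a
    # distribution that depends only on the current state (Markov property).
    # The states and transition probabilities below are invented for illustration.
    states = ["sunny", "rainy"]
    transition_probs = np.array([
        [0.8, 0.2],  # next-state distribution when the current state is "sunny"
        [0.4, 0.6],  # next-state distribution when the current state is "rainy"
    ])

    rng = np.random.default_rng(seed=0)

    def step(current_state: int) -> int:
        """Sample the index of the next state using only the current state."""
        return rng.choice(len(states), p=transition_probs[current_state])

    # Simulate a short trajectory starting from "sunny".
    state = 0
    trajectory = [states[state]]
    for _ in range(5):
        state = step(state)
        trajectory.append(states[state])

    print(" -> ".join(trajectory))

Note that the sampling function never looks at earlier states in the trajectory; the current state alone determines the distribution over next states.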

CartPole

At each step of the cart and pole simulation, several variables can be observed, such as the cart position, cart velocity, pole angle, and pole angular velocity. The possible actions are to move the cart to the right or to the left (see the code sketch after the list below):

  1. States: Four dimensions of continuous values...
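
As a quick illustration of these observations and actions, the following sketch uses the classic OpenAI Gym API (the pre-0.26 gym package, as used in the book); the environment ID CartPole-v0 is an assumption here, and CartPole-v1 would work just as well.

    import gym

    # Create the CartPole environment (classic Gym API assumed).
    env = gym.make("CartPole-v0")

    # The observation is a vector of four continuous values:
    # cart position, cart velocity, pole angle, and pole angular velocity.
    print(env.observation_space)  # Box(4,)

    # There are two discrete actions: push the cart left (0) or right (1).
    print(env.action_space)       # Discrete(2)

    # Step the environment with random actions for a few iterations.
    observation = env.reset()
    for _ in range(10):
        action = env.action_space.sample()                  # random left/right push
        observation, reward, done, info = env.step(action)  # classic 4-tuple API
        if done:
            observation = env.reset()

    env.close()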