The Reinforcement Learning Workshop

By: Alessandro Palmas, Emanuele Ghelfi, Dr. Alexandra Galina Petre, Mayur Kulkarni, Anand N.S., Quan Nguyen, Aritra Sen, Anthony So, Saikat Basak

Overview of this book

Various intelligent applications such as video games, inventory management software, warehouse robots, and translation tools use reinforcement learning (RL) to make decisions and perform actions that maximize the probability of the desired outcome. This book will help you get to grips with the techniques and algorithms for implementing RL in your machine learning models. Starting with an introduction to RL, you'll be guided through different RL environments and frameworks. You'll learn how to implement your own custom environments and use OpenAI Baselines to run RL algorithms. Once you've explored classic RL techniques such as Dynamic Programming, Monte Carlo, and TD Learning, you'll understand when to apply the different deep learning methods in RL and advance to deep Q-learning. The book will even help you understand the different stages of machine-based problem-solving by using DARQN on the popular video game Breakout. Finally, you'll find out when to use a policy-based method to tackle an RL problem. By the end of The Reinforcement Learning Workshop, you'll be equipped with the knowledge and skills needed to solve challenging problems using reinforcement learning.

7. Temporal Difference Learning

Activity 7.01: Using TD(0) Q-Learning to Solve FrozenLake-v0 Stochastic Transitions

  1. Import the required modules:
    import numpy as np
    import matplotlib.pyplot as plt
    %matplotlib inline
    import gym
  2. Instantiate the Gym environment called FrozenLake-v0, setting the is_slippery flag to True to enable stochastic transitions:
    env = gym.make('FrozenLake-v0', is_slippery=True)
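
    With is_slippery=True, each move succeeds only with probability 1/3 and slips to one of the perpendicular directions otherwise, so the same action can lead to different states. As a quick sanity check (not part of the activity itself), you can reset the environment and repeat one action a few times; this sketch assumes the classic Gym reset/step API used throughout this chapter:

    # Repeat the same action from the start state; with is_slippery=True
    # the resulting state varies because transitions are stochastic.
    for _ in range(5):
        env.reset()
        obs, reward, done, info = env.step(2)  # 2 = move right
        print("State reached after moving right:", obs)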
  3. Take a look at the action and observation spaces:
    print("Action space = ", env.action_space)
    print("Observation space = ", env.observation_space)

    This will print out the following:

    Action space =  Discrete(4)
    Observation space =  Discrete(16)
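
    Here, Discrete(4) means the four moves (left, down, right, up) are encoded as the integers 0 to 3, and Discrete(16) enumerates the 16 cells of the 4x4 grid row by row from the top-left corner. As an aside (this helper is not part of the activity), a state index can be mapped back to grid coordinates:

    # States are numbered row by row, so divmod recovers (row, col).
    state = 6
    row, col = divmod(state, 4)
    print("State", state, "is at row", row, "and column", col)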
  4. Create two dictionaries to easily translate the action numbers into moves:
    actionsDict = {}
    actionsDict[0] = " L "
    actionsDict[1] = " D "
    actionsDict[2] = " R "
    actionsDict[3] = " U "
    actionsDictInv = {}
    actionsDictInv["L"] = 0
    actionsDictInv["D&quot...