
Hands-On Q-Learning with Python

By: Nazia Habib

Overview of this book

Q-learning is a machine learning algorithm used to solve sequential decision-making problems in artificial intelligence (AI), and it is one of the most widely studied techniques in reinforcement learning. This book starts off by introducing you to reinforcement learning and Q-learning, in addition to helping you become familiar with OpenAI Gym as well as libraries such as Keras and TensorFlow. A few chapters into the book, you will gain insights into model-free Q-learning and use deep Q-networks and double deep Q-networks to solve complex problems. This book will guide you in exploring use cases such as self-driving vehicles and OpenAI Gym's CartPole problem. You will also learn how to tune and optimize Q-networks and their hyperparameters. As you progress, you will understand the reinforcement learning approach to solving real-world problems. You will also explore how to use Q-learning and related algorithms in scientific research. Toward the end, you'll gain insight into what's in store for reinforcement learning. By the end of this book, you will be equipped with the skills you need to solve reinforcement learning problems using Q-learning algorithms with OpenAI Gym, Keras, and TensorFlow.
Table of Contents (14 chapters)

Section 1: Q-Learning: A Roadmap (begins at Chapter 1)
Section 2: Building and Optimizing Q-Learning Agents (begins at Chapter 6)
Section 3: Advanced Q-Learning Challenges with Keras, TensorFlow, and OpenAI Gym (begins at Chapter 9)

Implementing your agent

Let's recreate the Taxi-v2 environment. We'll need to import numpy this time. We'll be using the term state instead of observation in this chapter for consistency with the terminology we used in Chapter 1, Brushing Up on Reinforcement Learning Concepts:

import gym
import numpy as np

# Create the Taxi-v2 environment and reset it to get the initial state
env = gym.make('Taxi-v2')
state = env.reset()

Create the Q-table as follows:

Q = np.zeros([env.observation_space.n, env.action_space.n])
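As a quick sanity check, you can inspect the table's shape. The sketch below uses the literal sizes that Taxi-v2's discrete spaces report (`env.observation_space.n` is 500 and `env.action_space.n` is 6), so it runs without the environment installed:

```python
import numpy as np

# Taxi-v2 has 500 discrete states and 6 discrete actions, so the
# table created above has shape (500, 6). We use the literal sizes
# here so this check runs without gym installed.
Q = np.zeros([500, 6])

print(Q.shape)  # (500, 6)
```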

The Q-table is initialized as a two-dimensional NumPy array of zeros. The first three rows of the Q-table currently look like this:

State | South (0) | North (1) | East (2) | West (3) | Pickup (4) | Dropoff (5)
    0 |         0 |         0 |        0 |        0 |          0 |           0
    1 |         0 |         0 |        0 |        0 |          0 |           0
    2 |         0 |         0 |        0 |        0 |          0 |           0

The first column represents the state, and the remaining column headers name the six possible actions. The Q-values for all the state-action pairs are currently zero.
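To see how these zeros change during training, here is a minimal sketch of the standard Q-learning update applied to a single transition. The alpha, gamma, and transition values are illustrative choices for this example, not the book's tuned settings:

```python
import numpy as np

alpha, gamma = 0.1, 0.9   # illustrative learning rate and discount factor
Q = np.zeros([500, 6])    # Taxi-v2: 500 states, 6 actions

# A made-up transition: in state 0, taking action 2 (East) yields
# reward -1 and lands the agent in state 100.
state, action, reward, next_state = 0, 2, -1, 100

# Standard Q-learning update rule:
# Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
Q[state, action] += alpha * (reward + gamma * np.max(Q[next_state]) - Q[state, action])

print(Q[state, action])  # -0.1, since the table started as all zeros
```

Because every entry starts at zero, the first update for a state-action pair simply stores `alpha * reward`; later updates blend in the discounted value of the best next action.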