Hands-On Q-Learning with Python

By: Nazia Habib

Overview of this book

Q-learning is a machine learning algorithm used to solve optimization problems in artificial intelligence (AI). It belongs to reinforcement learning, one of the most popular fields of study among AI researchers. This book starts off by introducing you to reinforcement learning and Q-learning, in addition to helping you become familiar with OpenAI Gym as well as libraries such as Keras and TensorFlow. A few chapters into the book, you will gain insights into model-free Q-learning and use deep Q-networks and double deep Q-networks to solve complex problems. This book will guide you in exploring use cases such as self-driving vehicles and OpenAI Gym’s CartPole problem. You will also learn how to tune and optimize Q-networks and their hyperparameters. As you progress, you will understand the reinforcement learning approach to solving real-world problems. You will also explore how to use Q-learning and related algorithms in scientific research. Toward the end, you’ll gain insight into what’s in store for reinforcement learning. By the end of this book, you will be equipped with the skills you need to solve reinforcement learning problems using Q-learning algorithms with OpenAI Gym, Keras, and TensorFlow.
Table of Contents (14 chapters)

Section 1: Q-Learning: A Roadmap
Section 2: Building and Optimizing Q-Learning Agents
Section 3: Advanced Q-Learning Challenges with Keras, TensorFlow, and OpenAI Gym

To get the most out of this book

You should have a Python development environment available and be comfortable programming in Python at an intermediate level or above.

You should have some knowledge of descriptive statistics, linear algebra, and probability theory. If you are a data science or machine learning practitioner, you have the ideal background for approaching the problems in this book.

You will need Python 3.5 or later in order to use OpenAI Gym. You can either install Gym with pip or clone the Gym repository and install it from source.
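As a quick sanity check before you start, you can confirm that your Python version and Gym installation work together. The following is a minimal sketch, assuming Gym has already been installed (for example, with pip); Taxi-v2 is the environment used in this book's code samples:

import sys
import gym

print(sys.version)            # should report Python 3.5 or later
env = gym.make('Taxi-v2')     # the Taxi environment used in the examples in this book
state = env.reset()
print(state)                  # prints the index of the initial state
env.close()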

Download the example code files

You can download the example code files for this book from your account at www.packt.com. If you purchased this book elsewhere, you can visit www.packt.com/support and register to have the files emailed directly to you.

You can download the code files by following these steps:

  1. Log in or register at www.packt.com.
  2. Select the SUPPORT tab.
  3. Click on Code Downloads & Errata.
  4. Enter the name of the book in the Search box and follow the onscreen instructions.

Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:

  • WinRAR/7-Zip for Windows
  • Zipeg/iZip/UnRarX for Mac
  • 7-Zip/PeaZip for Linux

The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/Hands-On-Q-Learning-with-Python. In case there's an update to the code, it will be updated on the existing GitHub repository.

We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Download the color images

Conventions used

There are a number of text conventions used throughout this book.

CodeInText: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: "This is an assignment where we are setting the value of Q[state, action]."
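For context, an assignment like this typically updates a single cell of a NumPy Q-table indexed by a discrete state and action. The following is a minimal sketch; the sample state, action, reward, learning rate, and discount values are illustrative rather than taken from the book's code:

import numpy as np

n_states, n_actions = 500, 6          # Taxi-v2 has 500 discrete states and 6 actions
Q = np.zeros((n_states, n_actions))   # Q-table initialized to zero

state, action, reward, next_state = 0, 1, -1, 100   # illustrative values
alpha, gamma = 0.1, 0.9                             # learning rate and discount factor

# Q-learning update: move Q[state, action] toward the bootstrapped target
Q[state, action] = Q[state, action] + alpha * (
    reward + gamma * np.max(Q[next_state]) - Q[state, action]
)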

A block of code is set as follows:

import gym
import numpy as np
env = gym.make('Taxi-v2')
state = env.reset()

When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:

import gym
import numpy as np
env = gym.make('Taxi-v2')
state = env.reset()

Any command-line input or output is written as follows:

pip install gym 

Bold: Indicates a new term, an important word, or words that you see on screen. For example: "The two major model-free RL algorithms are called Q-learning and State-Action-Reward-State-Action (SARSA)."

Warnings or important notes appear like this.
Tips and tricks appear like this.