Hands-On Q-Learning with Python

By Nazia Habib

Overview of this book

Q-learning is a machine learning algorithm used to solve optimization problems in artificial intelligence (AI), and the reinforcement learning field it belongs to is one of the most popular areas of study among AI researchers. This book starts by introducing you to reinforcement learning and Q-learning, and helps you become familiar with OpenAI Gym as well as libraries such as Keras and TensorFlow. A few chapters in, you will gain insights into model-free Q-learning and use deep Q-networks and double deep Q-networks to solve complex problems. The book guides you through use cases such as self-driving vehicles and OpenAI Gym’s CartPole problem, and shows you how to tune and optimize Q-networks and their hyperparameters. As you progress, you will understand the reinforcement learning approach to solving real-world problems and explore how to use Q-learning and related algorithms in scientific research. Toward the end, you’ll gain insight into what’s in store for reinforcement learning. By the end of this book, you will be equipped with the skills you need to solve reinforcement learning problems using Q-learning algorithms with OpenAI Gym, Keras, and TensorFlow.
Table of Contents (14 chapters)
Section 1: Q-Learning: A Roadmap
Section 2: Building and Optimizing Q-Learning Agents
Section 3: Advanced Q-Learning Challenges with Keras, TensorFlow, and OpenAI Gym

Questions

  1. Why do we choose to use the words state and observation interchangeably? When would it be more appropriate to use the word state?
  2. How do we know when the Q-function has converged? (See the convergence-check sketch after this list.)
  3. What happens to the Q-table when the Q-function has converged?
  4. When do we know the agent has found the optimal path to the goal? Describe in terms of the previous two questions.
  5. What does numpy.argmax() return?
  6. What does numpy.max() return? (A short demo of both calls follows this list.)
  7. Why does the randomly acting agent take thousands of time steps to reach the goal? How does the Q-learning agent perform better?
  8. Describe one benefit of decaying alpha. (A sample decay schedule follows this list.)
  9. What is overfitting and how does it apply in the context of an RL model?
  10. By what order of magnitude does the number of time steps needed to reach the goal decrease when the number of training episodes is multiplied by 10? Give a general response to this; there may be multiple valid...
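
For question 2, one common way to operationalize convergence (not the only one, and not necessarily the book's) is to watch how much the Q-table changes per episode. The table shape and the threshold below are illustrative assumptions:

```python
import numpy as np

# Hypothetical convergence check: treat the Q-function as converged when a
# full training episode barely changes the Q-table. The 16-state, 4-action
# shape and the 1e-4 threshold are illustrative assumptions.
n_states, n_actions, threshold = 16, 4, 1e-4

q_table = np.zeros((n_states, n_actions))
q_previous = q_table.copy()

# ... run one training episode that updates q_table in place ...

if np.max(np.abs(q_table - q_previous)) < threshold:
    print("Q-table has effectively converged")
```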
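For questions 5 and 6, this minimal sketch shows the two NumPy calls side by side on a made-up Q-value row:

```python
import numpy as np

# A toy Q-value row for one state with four possible actions.
q_values = np.array([0.1, 0.5, 0.3, 0.2])

print(np.max(q_values))     # 0.5 -- the largest Q-value itself
print(np.argmax(q_values))  # 1   -- the index (action) of that largest value
```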
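For question 8, here is a minimal sketch of one possible decay schedule for the learning rate alpha; the multiplicative schedule and its constants are assumptions for illustration, not values from the book:

```python
# Hypothetical multiplicative decay of the learning rate (alpha).
# Starting value, floor, and decay factor are illustrative assumptions.
alpha, alpha_min, decay_rate = 1.0, 0.01, 0.995

for episode in range(1000):
    # ... run one training episode, using `alpha` in the Q-update ...
    alpha = max(alpha_min, alpha * decay_rate)

print(f"final alpha after 1000 episodes: {alpha:.4f}")
```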