#### Overview of this book

Various intelligent applications, such as video games, inventory management software, warehouse robots, and translation tools, use reinforcement learning (RL) to make decisions and perform actions that maximize the probability of the desired outcome. This book will help you get to grips with the techniques and algorithms for implementing RL in your machine learning models. Starting with an introduction to RL, you'll be guided through different RL environments and frameworks. You'll learn how to implement your own custom environments and use OpenAI Baselines to run RL algorithms. Once you've explored classic RL techniques such as Dynamic Programming, Monte Carlo, and TD Learning, you'll understand when to apply the different deep learning methods in RL and advance to deep Q-learning. The book will even help you understand the different stages of machine-based problem-solving by using a DARQN on the popular video game Breakout. Finally, you'll find out when to use a policy-based method to tackle an RL problem. By the end of The Reinforcement Learning Workshop, you'll be equipped with the knowledge and skills needed to solve challenging problems using reinforcement learning.

# The Action-Value Function (Q Value Function)

In the previous sections, we learned about the state-value function, which tells us how rewarding it is for an agent to be in a particular state. Now we will learn about another function, one that combines states with actions. The action-value function tells us how good it is for the agent to take any given action from a given state. The action value is also called the Q value. The equation can be written as follows:

$$Q^{\pi}(s, a) = \mathbb{E}_{\pi}\left[\sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1} \,\middle|\, S_t = s, A_t = a\right]$$

Figure 9.13: Expression for the Q value function

The preceding equation can be written in an iterative fashion, as follows:

$$Q^{\pi}(s, a) = \mathbb{E}_{\pi}\left[R_{t+1} + \gamma\, Q^{\pi}(S_{t+1}, A_{t+1}) \,\middle|\, S_t = s, A_t = a\right]$$

Figure 9.14: Expression for the Q value function with iterations

This equation is also known as the Bellman equation: it expresses the value of taking action a in state s in terms of the immediate reward and the value of the next state-action pair. A Bellman equation can be described as follows:

"The total expected reward being in state s and taking action a is the sum of two components: the reward (which is r) that we can...
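To see the Bellman equation for Q values in action, the following sketch iterates the backup Q(s, a) = r + γ · Σ π(a'|s') Q(s', a') on a tiny, made-up MDP until the values converge. The two-state MDP, its rewards, and the uniform-random policy here are invented purely for illustration; this is a minimal example of the idea, not code from the book.

```python
# A tiny, hypothetical 2-state, 2-action MDP (illustrative only):
# transitions[s][a] = (next_state, reward), deterministic for simplicity.
transitions = {
    0: {0: (0, 0.0), 1: (1, 1.0)},
    1: {0: (0, 2.0), 1: (1, 0.0)},
}
gamma = 0.9  # discount factor

# A fixed uniform-random policy: pi(a|s) = 0.5 for both actions.
policy = {s: {0: 0.5, 1: 0.5} for s in transitions}

# Start with Q = 0 everywhere and repeatedly apply the Bellman backup:
# Q(s, a) = r + gamma * sum over a' of pi(a'|s') * Q(s', a')
Q = {(s, a): 0.0 for s in transitions for a in transitions[s]}
for _ in range(1000):
    delta = 0.0
    for s in transitions:
        for a in transitions[s]:
            s_next, r = transitions[s][a]
            backup = r + gamma * sum(
                policy[s_next][a2] * Q[(s_next, a2)]
                for a2 in transitions[s_next]
            )
            delta = max(delta, abs(backup - Q[(s, a)]))
            Q[(s, a)] = backup
    if delta < 1e-10:  # stop once the values no longer change
        break

for (s, a), q in sorted(Q.items()):
    print(f"Q(s={s}, a={a}) = {q:.3f}")
```

Because the backup is a contraction (for γ < 1), the loop converges to the unique fixed point, which is exactly the Q value function for this policy: each entry ends up equal to the immediate reward plus the discounted, policy-weighted value of the successor state.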