#### Overview of this book

Reinforcement learning (RL) is a branch of machine learning that has gained popularity in recent years. It allows you to train AI models that learn from their own actions and optimize their behavior. PyTorch has also emerged as a preferred tool for training RL models because of its efficiency and ease of use. With this book, you'll explore important RL concepts and the implementation of algorithms in PyTorch 1.x. The recipes in the book, along with real-world examples, will help you master various RL techniques, such as dynamic programming, Monte Carlo simulations, temporal difference, and Q-learning. You'll also gain insights into industry-specific applications of these techniques. Later chapters will guide you through solving problems such as the multi-armed bandit problem and the cartpole problem using multi-armed bandit algorithms and function approximation. You'll also learn how to use Deep Q-Networks to complete Atari games, along with how to effectively implement policy gradients. Finally, you'll discover how RL techniques are applied to Blackjack, Gridworld environments, internet advertising, and the Flappy Bird game. By the end of this book, you'll have developed the skills you need to implement popular RL algorithms and use RL techniques to solve real-world problems.

# Implementing the actor-critic algorithm

In the REINFORCE with baseline algorithm, there are two separate components: the policy model and the value function. We can actually combine the learning of these two components, since the goal of learning the value function is to update the policy network. This is exactly what the actor-critic algorithm does, and it is what we are going to develop in this recipe.

The network for the actor-critic algorithm consists of the following two parts:

• Actor: This takes in the input state and outputs the action probabilities. Essentially, it learns the optimal policy by updating the model using information provided by the critic.
• Critic: This evaluates how good it is to be at the input state by computing the value function. The value guides the actor on how it should adjust.

These two components share the parameters of the input and hidden layers of the network, as...
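As a concrete illustration of this two-headed layout, here is a minimal sketch of such a network in PyTorch. The class name, layer sizes, and use of a single shared hidden layer are illustrative assumptions, not the book's exact implementation: the actor head emits action probabilities via a softmax, while the critic head emits a scalar state-value estimate, and both heads sit on top of the same hidden representation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ActorCriticModel(nn.Module):
    """Illustrative two-headed actor-critic network (hypothetical names).

    A single hidden layer is shared; the actor head outputs action
    probabilities and the critic head outputs a scalar state value.
    """

    def __init__(self, n_input, n_output, n_hidden):
        super().__init__()
        self.fc = nn.Linear(n_input, n_hidden)       # shared hidden layer
        self.action = nn.Linear(n_hidden, n_output)  # actor head
        self.value = nn.Linear(n_hidden, 1)          # critic head

    def forward(self, x):
        x = F.relu(self.fc(x))
        # Actor: probability distribution over actions
        action_probs = F.softmax(self.action(x), dim=-1)
        # Critic: estimated value of the input state
        state_value = self.value(x)
        return action_probs, state_value


# Quick sanity check with CartPole-sized dimensions
# (4 state features, 2 actions) -- sizes are assumptions for the demo.
model = ActorCriticModel(n_input=4, n_output=2, n_hidden=32)
probs, value = model(torch.randn(1, 4))
print(tuple(probs.shape), tuple(value.shape))  # (1, 2) (1, 1)
```

During training, the actor's loss typically uses the log-probability of the taken action weighted by an advantage derived from the critic's value estimate, while the critic is regressed toward the observed return; because the hidden layer is shared, both losses update the same underlying features.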