Deep Reinforcement Learning with Python - Second Edition

By: Sudharsan Ravichandiran

Overview of this book

With significant enhancements in the quality and quantity of algorithms in recent years, this second edition of Hands-On Reinforcement Learning with Python has been revamped into an example-rich guide to learning state-of-the-art reinforcement learning (RL) and deep RL algorithms with TensorFlow 2 and the OpenAI Gym toolkit. In addition to exploring RL basics and foundational concepts such as the Bellman equation, Markov decision processes, and dynamic programming algorithms, this second edition dives deep into the full spectrum of value-based, policy-based, and actor-critic RL methods. It explores state-of-the-art algorithms such as DQN, TRPO, PPO, ACKTR, DDPG, TD3, and SAC in depth, demystifying the underlying math and demonstrating implementations through simple code examples. The book has several new chapters dedicated to new RL techniques, including distributional RL, imitation learning, inverse RL, and meta RL. You will learn to leverage Stable Baselines, an improved implementation of OpenAI's Baselines library, to effortlessly implement popular RL algorithms. The book concludes with an overview of promising research approaches such as meta-learning and imagination-augmented agents. By the end, you will be skilled in effectively employing RL and deep RL in your real-world projects.

Chapter 7 – Deep Learning Foundations

  1. The activation function is used to introduce non-linearity to neural networks; without one, a stack of layers collapses into a single linear transformation (see the first sketch after this list).
  2. The softmax function is essentially a generalization of the sigmoid function to more than two classes. It is usually applied to the final layer of the network when performing multi-class classification tasks. It outputs the probability of each class, so the softmax values always sum to 1 (see the second sketch after this list).
  3. The epoch specifies the number of times the neural network sees our whole training data. So, we can say one epoch is equal to one forward pass and one backward pass over all the training samples (see the third sketch after this list).
  4. RNNs are widely applied to use cases that involve sequential data, such as time series, text, audio, speech, video, and weather data. They have been used extensively in various Natural Language Processing (NLP) tasks, such as language translation, sentiment analysis, and text generation.
  5. While backpropagating through an RNN, we multiply the recurrent weight matrix once per time step. Over many time steps, these repeated multiplications can shrink the gradient until it effectively vanishes (the vanishing gradient problem) or grow it until it explodes, which makes it hard for the network to learn long-range dependencies (see the fourth sketch after this list).
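
To illustrate answer 1, here is a minimal sketch in plain NumPy (not from the book; the shapes and names are ours). It shows that two layers without an activation collapse into a single linear map, while inserting a non-linearity such as ReLU breaks that collapse:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "layers" with no activation function: their composition is
# still one linear map (W2 @ W1), so the extra depth adds nothing.
W1 = rng.standard_normal((4, 3))
W2 = rng.standard_normal((2, 4))
x = rng.standard_normal(3)

linear_stack = W2 @ (W1 @ x)
collapsed = (W2 @ W1) @ x
print(np.allclose(linear_stack, collapsed))  # True

# A non-linearity (ReLU) between the layers breaks the collapse,
# letting the network represent non-linear functions.
relu = lambda z: np.maximum(z, 0.0)
nonlinear_stack = W2 @ relu(W1 @ x)
print(np.allclose(nonlinear_stack, collapsed))  # False in general
```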
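For answer 2, a minimal softmax sketch in plain NumPy (the function name and test values are ours). Subtracting the maximum logit before exponentiating leaves the result unchanged but avoids numeric overflow:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax: shift by the max logit first.
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])  # raw outputs of the final layer
probs = softmax(scores)
print(probs)        # roughly [0.659 0.242 0.099], one probability per class
print(probs.sum())  # 1.0
```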
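For answer 3, a small sketch using TensorFlow 2 / Keras (the book's stack); the toy data and model here are made up purely for illustration. The epochs argument controls how many full passes the network makes over the training set:

```python
import numpy as np
import tensorflow as tf

# Toy data: 100 samples with 8 features and binary labels.
x_train = np.random.rand(100, 8).astype("float32")
y_train = np.random.randint(0, 2, size=(100, 1)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# epochs=5: the network sees the whole training set five times; each
# epoch is one forward and one backward pass over every sample,
# processed here in mini-batches of 32.
model.fit(x_train, y_train, epochs=5, batch_size=32)
```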
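Finally, for answer 5, a numeric sketch of the vanishing gradient problem in plain NumPy (the sizes and the 0.9 scale factor are ours). Backpropagation through time multiplies the gradient by the recurrent weight matrix once per step, so when the matrix's largest singular value is below 1 the gradient norm shrinks geometrically:

```python
import numpy as np

rng = np.random.default_rng(1)

# Recurrent weight matrix scaled so its largest singular value is 0.9.
W = rng.standard_normal((8, 8))
W *= 0.9 / np.linalg.norm(W, 2)

grad = np.ones(8)  # stand-in for the gradient at the last time step
for t in range(1, 51):
    grad = W.T @ grad  # one step of backpropagation through time
    if t % 10 == 0:
        print(f"step {t:2d}: gradient norm = {np.linalg.norm(grad):.2e}")

# The norm decays geometrically (vanishing gradients); with a largest
# singular value above 1 it would grow instead (exploding gradients).
```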