The Reinforcement Learning Workshop
By Alessandro Palmas, Emanuele Ghelfi, Dr. Alexandra Galina Petre, Mayur Kulkarni, Anand N.S., Quan Nguyen, Aritra Sen, Anthony So, Saikat Basak
4.7 (7)
Overview of this book

Various intelligent applications such as video games, inventory management software, warehouse robots, and translation tools use reinforcement learning (RL) to make decisions and perform actions that maximize the probability of the desired outcome. This book will help you get to grips with the techniques and algorithms for implementing RL in your machine learning models. Starting with an introduction to RL, you'll be guided through different RL environments and frameworks. You'll learn how to implement your own custom environments and use OpenAI Baselines to run RL algorithms. Once you've explored classic RL techniques such as Dynamic Programming, Monte Carlo, and TD Learning, you'll understand when to apply the different deep learning methods in RL and advance to deep Q-learning. The book will even help you understand the different stages of machine-based problem-solving by using DARQN on the popular video game Breakout. Finally, you'll find out when to use a policy-based method to tackle an RL problem. By the end of The Reinforcement Learning Workshop, you'll be equipped with the knowledge and skills needed to solve challenging problems using reinforcement learning.
Table of Contents (14 chapters)
2. Markov Decision Processes and Bellman Equations

The UCB algorithm

The term upper confidence bound reflects the fact that, instead of relying solely on the average of past rewards returned from each arm (as the Greedy algorithm does), the algorithm computes an upper bound on its estimate of the expected reward for each arm.

This concept of a confidence bound is quite common in probability and statistics, where the distribution of a quantity that we care about (in this case, the reward from each arm) cannot be represented well using simply the average of past observations. Instead, a confidence bound is a numerical range that aims to estimate and narrow down where most of the values in the distribution in question will lie. For example, this idea is widely used in Bayesian analyses and Bayesian optimization.
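As a concrete illustration, this upper bound can be computed with the standard UCB1 rule (the formula is the widely used UCB1 score; the function name and exploration constant below are illustrative choices, not code from the book): each arm's score is its empirical mean plus an exploration bonus that shrinks as the arm is pulled more often, so the bound narrows exactly as more observations accumulate.

```python
import math

def ucb1_scores(avg_rewards, counts, total_pulls):
    """UCB1 score per arm: empirical mean plus an exploration bonus.

    The bonus sqrt(2 * ln(total_pulls) / n) is large for arms pulled
    rarely, keeping their upper bound wide; it shrinks as evidence
    accumulates, narrowing the bound around the empirical mean.
    """
    return [
        mean + math.sqrt(2.0 * math.log(total_pulls) / n)
        for mean, n in zip(avg_rewards, counts)
    ]
```

At each step, the agent pulls the arm with the highest score; note that an arm with fewer pulls can win even when its empirical mean is no better.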

In the following section, we will discuss how UCB constructs and uses its confidence bound.

Optimism in the Face of Uncertainty

Consider being in the middle of a bandit problem with only two arms. We have already pulled the first arm 100 times...
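To make the two-arm scenario concrete, here is a minimal simulation sketch (Bernoulli rewards and the UCB1 score are assumptions for illustration, not the book's code): after pulling each arm once to initialize, the agent always pulls the arm with the highest upper bound, and the arm with the higher true mean ends up dominating the pull counts.

```python
import math
import random

def run_ucb1(true_means, steps, seed=0):
    """Simulate UCB1 on a Bernoulli bandit; return pull counts per arm."""
    rng = random.Random(seed)
    k = len(true_means)
    counts = [0] * k   # times each arm was pulled
    sums = [0.0] * k   # total reward collected per arm
    for t in range(1, steps + 1):
        if t <= k:
            arm = t - 1  # pull each arm once to initialize estimates
        else:
            # pick the arm with the highest upper confidence bound
            arm = max(
                range(k),
                key=lambda a: sums[a] / counts[a]
                + math.sqrt(2.0 * math.log(t) / counts[a]),
            )
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
    return counts
```

Running this with two well-separated arms shows the better arm accumulating almost all the pulls, while the worse arm is still revisited occasionally as its shrinking bound demands.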
