Temporal difference learning


Temporal Difference (TD) learning is the central and novel idea of reinforcement learning. TD learning combines ideas from both Monte Carlo (MC) methods and Dynamic Programming (DP). Like MC methods, TD methods can learn directly from raw experience without a model of the environment. Like DP, TD methods update estimates based in part on other learned estimates, without waiting for a final outcome; MC methods, by contrast, update their estimates only after the final outcome has been reached.

TD prediction

Both TD and MC methods use experience to solve the prediction problem. Given some policy π, both methods update their estimate V of vπ for the non-terminal states St occurring in that experience. Monte Carlo methods wait until the return following the visit is known, and then use that return as the target for V(St).
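In the standard constant-α form of this update (written here in symbols, since the excerpt does not show the equation), the Monte Carlo target is the actual return Gt:

$$ V(S_t) \leftarrow V(S_t) + \alpha \big[ G_t - V(S_t) \big] $$

where Gt is the return observed after time t and α is a constant step-size parameter.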

The preceding method can be called constant-α MC, where MC must wait until the end of the episode to determine the increment...
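TD(0), by contrast, applies its update after every single step, bootstrapping from the current estimate of the next state's value: V(St) ← V(St) + α[Rt+1 + γV(St+1) − V(St)]. The following is a minimal sketch of tabular TD(0) prediction on a small random-walk task; the environment, the policy, and all names in the code are illustrative assumptions, not code from the book:

import random

# Hypothetical 5-state random walk: states 1..5, with terminal states 0 (left,
# reward 0) and 6 (right, reward 1). The policy moves left or right with equal
# probability.
N_STATES = 5
ALPHA = 0.1      # constant step size
GAMMA = 1.0      # undiscounted episodic task

def run_episode(V):
    """Run one episode, applying the TD(0) update after every step."""
    state = 3  # start in the middle state
    while state not in (0, N_STATES + 1):
        next_state = state + random.choice((-1, 1))
        reward = 1.0 if next_state == N_STATES + 1 else 0.0
        # The value of a terminal state is defined to be 0.
        v_next = 0.0 if next_state in (0, N_STATES + 1) else V[next_state]
        # TD(0) update: bootstrap from the current estimate of the next state,
        # rather than waiting for the final return as constant-alpha MC does.
        V[state] += ALPHA * (reward + GAMMA * v_next - V[state])
        state = next_state

if __name__ == "__main__":
    V = {s: 0.5 for s in range(1, N_STATES + 1)}  # initial value estimates
    for _ in range(1000):
        run_episode(V)
    print({s: round(v, 2) for s, v in sorted(V.items())})

Unlike constant-α MC, which must wait for the return Gt at the end of the episode, this update is applied online after each transition.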