We started off the chapter by understanding what TD learning is and how it combines the advantages of both DP and the MC method. We learned that, just like DP, TD learning bootstraps, and, just like the MC method, TD learning is model-free.
Later, we learned how to perform a prediction task using TD learning, and then walked through the TD prediction algorithm step by step.
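The TD prediction update described above can be sketched in a few lines of tabular code. This is a minimal illustration, not the book's implementation; the toy two-state environment and the `transitions` dictionary format are assumptions made here for self-containment.

```python
def td_prediction(transitions, policy, num_episodes=500, alpha=0.1, gamma=0.9):
    """Tabular TD(0) prediction.

    transitions maps (state, action) -> (next_state, reward, done);
    episodes always start in state 0.
    """
    V = {}  # state-value estimates, default 0.0
    for _ in range(num_episodes):
        s = 0
        done = False
        while not done:
            a = policy(s)
            s_next, r, done = transitions[(s, a)]
            # The TD target bootstraps from the current estimate of the
            # next state's value (like DP) while learning from sampled
            # experience without a model (like MC).
            target = r + gamma * (0.0 if done else V.get(s_next, 0.0))
            V[s] = V.get(s, 0.0) + alpha * (target - V.get(s, 0.0))
            s = s_next
    return V

# Toy two-state chain: state 0 -> state 1 (reward 0),
# state 1 -> terminal (reward 1)
chain = {(0, 0): (1, 0.0, False), (1, 0): (2, 1.0, True)}
V = td_prediction(chain, policy=lambda s: 0)
```

On this chain, `V[1]` converges toward 1.0 and `V[0]` toward `gamma * V[1] = 0.9`, showing how the bootstrapped estimates propagate backward through the states.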
Next, we learned how to use TD learning for a control task. First, we covered the on-policy TD control method called SARSA, and then the off-policy TD control method called Q learning. We also learned how to find the optimal policy in the Frozen Lake environment using both SARSA and Q learning.
We also learned the difference between the SARSA and Q learning methods. We understood that SARSA is an on-policy algorithm, meaning that we use a single epsilon-greedy policy both to select an action in the environment and to compute the Q value of the next state-action pair in the update target, whereas Q learning is off-policy: it behaves with an epsilon-greedy policy but uses the maximum Q value of the next state as its target.
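The on-policy versus off-policy distinction comes down to one line in the update rule. The following sketch (my own illustration, with Q stored as a `(state, action)`-keyed dictionary, an assumption made here) places the two targets side by side:

```python
import random

def epsilon_greedy(Q, s, n_actions, epsilon=0.1):
    """Behavior policy used by both SARSA and Q learning."""
    if random.random() < epsilon:
        return random.randrange(n_actions)
    return max(range(n_actions), key=lambda a: Q.get((s, a), 0.0))

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.9):
    # On-policy: the target uses a_next, the action actually chosen by
    # the same epsilon-greedy policy that behaves in the environment.
    target = r + gamma * Q.get((s_next, a_next), 0.0)
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (target - Q.get((s, a), 0.0))

def q_learning_update(Q, s, a, r, s_next, n_actions, alpha=0.1, gamma=0.9):
    # Off-policy: the target uses the greedy (maximum) Q value of the
    # next state, regardless of which action the behavior policy takes.
    target = r + gamma * max(Q.get((s_next, b), 0.0) for b in range(n_actions))
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (target - Q.get((s, a), 0.0))
```

For example, with `Q[(1, 0)] = 2.0` and `Q[(1, 1)] = 5.0`, a SARSA update from `(s=0, a=0, r=1.0)` that happened to pick `a_next = 0` uses the target `1.0 + 0.9 * 2.0 = 2.8`, while a Q learning update from the same transition uses `1.0 + 0.9 * 5.0 = 5.5`.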