Hands-On Artificial Intelligence for IoT - Second Edition
In this chapter, we learned about RL and how it differs from supervised and unsupervised learning. The emphasis was on DRL, where deep neural networks are used to approximate the policy function, the value function, or both. This chapter introduced OpenAI Gym, a library that provides a large number of environments for training RL agents. We learned about value-based methods such as Q-learning and used it to train an agent to pick up and drop off passengers in a taxi. We also used a DQN to train an agent to play an Atari game. The chapter then moved on to policy-based methods, specifically policy gradients. We covered the intuition behind policy gradients and used the DDPG algorithm to train a bipedal robot to walk.
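As a quick refresher, the tabular Q-learning update at the heart of the taxi example is Q(s, a) ← Q(s, a) + α(r + γ maxₐ′ Q(s′, a′) − Q(s, a)). The sketch below applies it to a tiny hand-rolled corridor environment rather than Gym's Taxi-v3, so it runs with no dependencies; the environment, hyperparameters, and reward values are illustrative assumptions, not the book's exact setup:

```python
import random

# Toy stand-in environment for Gym's Taxi-v3 (illustrative, not the book's code):
# a 1-D corridor of states 0..4; the agent starts at 0 and must reach the goal
# at state 4. Actions: 0 = move left, 1 = move right.
N_STATES, GOAL = 5, 4
ACTIONS = [0, 1]

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if nxt == GOAL else -0.01  # small step penalty, goal bonus
    return nxt, reward, nxt == GOAL

def train(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning with an epsilon-greedy behavior policy."""
    random.seed(seed)
    Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Explore with probability epsilon, otherwise act greedily.
            if random.random() < epsilon:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: Q[s][x])
            s2, r, done = step(s, a)
            # Q-learning update: bootstrap from the best next-state value.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = train()
# Greedy policy: in every non-goal state the agent learns to move right.
policy = [max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES)]
```

The same update rule drives the taxi agent; the only differences are the larger state space (500 states in Taxi-v3) and Gym's `env.reset()`/`env.step()` interface in place of the hand-written `step` function.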
In the next chapter, we’ll explore generative models and learn the secrets behind generative adversarial networks.