Keras 2.x Projects

By: Giuseppe Ciaburro

Overview of this book

Keras 2.x Projects explains how to leverage the power of Keras to build and train state-of-the-art deep learning models through a series of practical projects that look at a range of real-world application areas. To begin with, you will quickly set up a deep learning environment by installing the Keras library. Through each of the projects, you will explore advanced concepts of deep learning and learn how to build and run your deep learning models using the advanced offerings of Keras. You will train fully connected multilayer networks, convolutional neural networks, recurrent neural networks, autoencoders, and generative adversarial networks on real-world training datasets. The projects are all based on real-world scenarios of varying complexity, covering topics such as language recognition, stock volatility, energy consumption prediction, faster object classification for self-driving vehicles, and more. By the end of this book, you will be well versed in deep learning and its implementation with Keras, and you will have all the knowledge you need to train your own deep learning models to solve different kinds of problems.

Inverse reinforcement learning

In Chapter 9, Robot Control System Using Deep Reinforcement Learning, we addressed the amazing world of reinforcement learning. Reinforcement learning aims to create algorithms that can learn and adapt to environmental changes. This programming technique is based on the concept of receiving external stimuli, the nature of which depends on the choices the algorithm makes: a correct choice brings a reward, while an incorrect choice leads to a penalty. The goal of the system, of course, is to collect the highest possible reward. However, the reward function can often be difficult to define: it is not always easy to say whether a certain action in a certain state is positive for the agent. The purpose of inverse reinforcement learning (IRL) is to identify this function. In IRL, the reward function is derived from the observed behavior. As we have learned, in reinforcement learning, we use rewards...
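
To make the contrast concrete, the following is a minimal, hand-rolled sketch (not code from the book) of the IRL idea on a toy five-state chain. We only observe the expert moving right towards the last state; a per-state reward vector is then adjusted until the greedy policy it induces visits the same states as the expert, a simple feature-expectation-matching scheme. The chain environment, the function names, and the perceptron-style update rule are all illustrative assumptions.

# A minimal, self-contained sketch of the IRL idea on a toy 5-state chain:
# the expert is only seen moving right towards the last state. The true
# reward is never observed; instead, a per-state reward vector is nudged
# until the policy that is optimal under it visits the same states as the
# expert (a crude feature-expectation-matching scheme, for illustration only).
import numpy as np

N_STATES = 5          # states 0..4 arranged in a chain
ACTIONS = (-1, +1)    # move left or right (clipped at the ends)
HORIZON = 6           # length of each rollout
GAMMA = 0.9

def step(state, action):
    """Deterministic chain dynamics: move and clip to [0, N_STATES - 1]."""
    return int(np.clip(state + action, 0, N_STATES - 1))

def greedy_policy(reward, n_iter=50):
    """Value iteration under a candidate reward; returns a greedy policy."""
    V = np.zeros(N_STATES)
    for _ in range(n_iter):
        V = np.array([max(reward[step(s, a)] + GAMMA * V[step(s, a)]
                          for a in ACTIONS) for s in range(N_STATES)])
    return [max(ACTIONS, key=lambda a: reward[step(s, a)] + GAMMA * V[step(s, a)])
            for s in range(N_STATES)]

def visitation(policy_fn, start=0):
    """State-visitation counts of a rollout from the start state."""
    counts = np.zeros(N_STATES)
    s = start
    for _ in range(HORIZON):
        counts[s] += 1
        s = step(s, policy_fn(s))
    return counts

# Expert demonstrations: always move right (we observe behavior, not reward).
mu_expert = visitation(lambda s: +1)

# IRL loop: nudge the reward so the induced policy matches the expert's visits.
w = np.zeros(N_STATES)
for _ in range(20):
    policy = greedy_policy(w)
    mu_policy = visitation(lambda s: policy[s])
    w += 0.1 * (mu_expert - mu_policy)   # perceptron-style update

print("Recovered reward per state:", np.round(w, 2))
print("Greedy policy under it:    ", greedy_policy(w))

Running this recovers a reward that is highest at the rightmost state, which is enough to reproduce the expert's behavior even though the true reward was never observed.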