Keras Deep Learning Cookbook

By: Rajdeep Dua, Sujit Pal, Manpreet Singh Ghotra

Overview of this book

Keras has quickly emerged as a popular deep learning library. Written in Python, it allows you to train convolutional as well as recurrent neural networks with speed and accuracy. The Keras Deep Learning Cookbook shows you how to tackle different problems encountered while training efficient deep learning models, with the help of the popular Keras library. Starting with installing and setting up Keras, the book demonstrates how you can perform deep learning with Keras on top of TensorFlow. From loading data to fitting and evaluating your model for optimal performance, you will work through a step-by-step process to tackle every possible problem faced while training deep models. You will implement convolutional and recurrent neural networks, adversarial networks, and more with the help of this handy guide. In addition to this, you will learn how to train these models for real-world image and language processing tasks. By the end of this book, you will have a practical, hands-on understanding of how you can leverage the power of Python and Keras to perform effective deep learning.
Table of Contents (17 chapters)

The CartPole game with Keras


CartPole is one of the simpler environments in the OpenAI Gym (a game simulator). The goal of CartPole is to balance a pole connected by a single joint to the top of a moving cart. Instead of pixel information, the state provides numerical information such as the angle of the pole and the position of the cart. An agent can move the cart by performing a sequence of actions, 0 or 1, pushing it left or right:

The OpenAI Gym makes interacting with the game environment really simple:

next_state, reward, done, info = env.step(action)

In the preceding code, action can be either 0 or 1. When we pass one of those numbers, env, which is the game environment, emits the results. The done variable is a Boolean value saying whether the game has ended or not. The old state, paired with the action, the next_state, and the reward, is the information we need for training the agent.
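To see how this state/action/reward loop fits together, the sketch below runs one full episode with a random policy. It uses a hypothetical StubCartPole class that mimics the Gym step interface, so it runs without Gym installed; in practice you would create the real environment with gym.make('CartPole-v0') instead.

```python
import random

class StubCartPole:
    """Hypothetical stand-in mimicking the Gym environment API."""

    def reset(self):
        self.steps = 0
        return [0.0, 0.0, 0.0, 0.0]  # initial 4-dimensional state

    def step(self, action):
        assert action in (0, 1)  # 0 pushes the cart left, 1 pushes it right
        self.steps += 1
        next_state = [random.uniform(-1, 1) for _ in range(4)]
        reward = 1.0                 # +1 for every step the pole stays up
        done = self.steps >= 10      # stub: episode ends after 10 steps
        return next_state, reward, done, {}

env = StubCartPole()
state = env.reset()
total_reward, done = 0.0, False
while not done:
    action = random.choice([0, 1])   # random policy, just for illustration
    next_state, reward, done, info = env.step(action)
    total_reward += reward
    state = next_state               # carry the new state into the next step
```

The (state, action, reward, next_state, done) tuples collected in this loop are exactly what the agent will later be trained on.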

How to do it...

We will be using a neural network to build the AI agent that plays Cartpole. The...
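As a rough sketch of what such an agent's network might look like (an assumed architecture, not necessarily the book's exact model), a small fully connected Keras network can map CartPole's 4-dimensional state to a value for each of the two actions:

```python
# A minimal sketch of a Keras network for a CartPole agent.
# The layer sizes (24 units) are illustrative assumptions.
from keras.models import Sequential
from keras.layers import Dense, Input

state_size = 4   # cart position, cart velocity, pole angle, pole angular velocity
action_size = 2  # 0 = push left, 1 = push right

model = Sequential()
model.add(Input(shape=(state_size,)))
model.add(Dense(24, activation='relu'))
model.add(Dense(24, activation='relu'))
model.add(Dense(action_size, activation='linear'))  # one output per action
model.compile(loss='mse', optimizer='adam')
```

Given a state, model.predict returns two scores, and the agent picks the action with the higher one.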