Keras 2.x Projects

By: Giuseppe Ciaburro

Overview of this book

Keras 2.x Projects explains how to leverage the power of Keras to build and train state-of-the-art deep learning models through a series of practical projects that cover a range of real-world application areas. To begin with, you will quickly set up a deep learning environment by installing the Keras library. Through each of the projects, you will explore the advanced concepts of deep learning and learn how to build and run your deep learning models using the advanced offerings of Keras. You will train fully connected multilayer networks, convolutional neural networks, recurrent neural networks, autoencoders, and generative adversarial networks using real-world training datasets. The projects you will undertake are all based on real-world scenarios of varying complexity, covering topics such as language recognition, stock volatility, energy consumption prediction, faster object classification for self-driving vehicles, and more. By the end of this book, you will be well versed with deep learning and its implementation with Keras. You will have all the knowledge you need to train your own deep learning models to solve different kinds of problems.

Reconstruction of Handwritten Digit Images Using Autoencoders

The term handwriting recognition (HWR) refers to the ability of a computer to receive and interpret intelligible handwritten input as text from sources such as paper documents, photographs, and touchscreens. Written text can be detected on a piece of paper with optical scanning, known as optical character recognition (OCR), or with intelligent word recognition.

An autoencoder is a neural network whose purpose is to encode its input into a lower-dimensional representation from which the input itself can be reconstructed. Autoencoders are made up of two subnetworks: an encoder and a decoder. Both the encoder and the decoder are differentiable with respect to the reconstruction distance function, so the parameters of the encoding and decoding functions can be optimized to minimize the reconstruction loss using stochastic gradient descent.
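
To make the encoder/decoder idea concrete, the following is a minimal sketch of a fully connected autoencoder in Keras applied to the MNIST handwritten digits. The 32-dimensional encoding, layer sizes, optimizer, and training settings are illustrative assumptions, not the book's exact architecture:

# Minimal fully connected autoencoder sketch for MNIST (assumed settings)
import numpy as np
from keras.datasets import mnist
from keras.layers import Input, Dense
from keras.models import Model

# Load the MNIST digits, scale pixels to [0, 1], and flatten 28 x 28 images to 784 values
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.astype('float32') / 255.0
x_test = x_test.astype('float32') / 255.0
x_train = x_train.reshape((len(x_train), 784))
x_test = x_test.reshape((len(x_test), 784))

encoding_dim = 32  # size of the compressed representation (an assumption)

# Encoder: compress the 784-dimensional input into the low-dimensional code
input_img = Input(shape=(784,))
encoded = Dense(encoding_dim, activation='relu')(input_img)

# Decoder: reconstruct the original 784 pixel values from the code
decoded = Dense(784, activation='sigmoid')(encoded)

# The autoencoder maps each input to its own reconstruction
autoencoder = Model(input_img, decoded)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')

# Minimize the reconstruction loss: the input is also the training target
autoencoder.fit(x_train, x_train,
                epochs=10,
                batch_size=256,
                shuffle=True,
                validation_data=(x_test, x_test))

# Reconstruct the test digits from their compressed encodings
reconstructed = autoencoder.predict(x_test)

Because the model is trained against its own input, no labels are needed; the quality of the reconstructed digits reflects how much information survives the compression to the low-dimensional code.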