
Python Image Processing Cookbook

By: Sandipan Dey

Overview of this book

With the advancements in wireless devices and mobile technology, demand is growing for people with digital image processing skills who can extract useful information from the ever-increasing volume of images. This book provides comprehensive coverage of the relevant tools and algorithms, and guides you through analysis and visualization for image processing. With the help of over 60 cutting-edge recipes, you'll address common challenges in image processing and learn how to perform complex tasks such as object detection, image segmentation, and image reconstruction using large hybrid datasets. Dedicated sections also take you through implementing various image enhancement and image restoration techniques, such as cartooning, gradient blending, and sparse dictionary learning. As you advance, you'll get to grips with face morphing and image segmentation techniques. With an emphasis on practical solutions, this book helps you apply deep learning techniques such as transfer learning and fine-tuning to solve real-world problems. By the end of this book, you'll be proficient in using the capabilities of the Python ecosystem to implement a wide range of image processing techniques effectively.
Table of Contents (11 chapters)

Using a variational autoencoder to reconstruct and generate images

A variational autoencoder (VAE) is a generative model that uses Bayesian inference to model the underlying probability distribution of images, so that it can sample new images from that distribution. Like an ordinary autoencoder, it is composed of two components: an encoder (the stack of layers that, in a vanilla autoencoder, compresses the input down to the bottleneck) and a decoder (the stack of layers that, in a vanilla autoencoder, reconstructs the input from the bottleneck's compressed representation). The difference between a VAE and an ordinary autoencoder is that, instead of mapping the input to a single latent vector (the bottleneck vector), the encoder maps the input to a probability distribution; random samples are then drawn from that distribution and fed to the decoder.
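The encode → sample → decode flow described above can be sketched in a few lines of NumPy. This is only an illustrative skeleton, not the book's implementation: the dimensions are toy values, the "networks" are single untrained random linear maps, and the key point is the reparameterization step, which draws a sample z = mu + sigma * eps so that the sampling remains a deterministic function of the distribution parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes (not from the book): 64-pixel inputs, 2-D latent space.
input_dim, latent_dim = 64, 2

# "Encoder": one random linear map producing the parameters of the latent
# Gaussian -- a mean vector and a log-variance vector per input image.
W_enc = rng.normal(scale=0.1, size=(input_dim, 2 * latent_dim))

# "Decoder": a linear map from a latent sample back to image space.
W_dec = rng.normal(scale=0.1, size=(latent_dim, input_dim))

def encode(x):
    # Map each input to the parameters of a diagonal Gaussian distribution.
    h = x @ W_enc
    mu, log_var = h[:, :latent_dim], h[:, latent_dim:]
    return mu, log_var

def reparameterize(mu, log_var):
    # Sample z ~ N(mu, sigma^2) as z = mu + sigma * eps with eps ~ N(0, 1),
    # so the random draw is expressed as a function of mu and log_var
    # (in a trained VAE this keeps the sampling step differentiable).
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def decode(z):
    # Reconstruct an image from a latent sample.
    return z @ W_dec

x = rng.normal(size=(5, input_dim))   # a batch of 5 fake "images"
mu, log_var = encode(x)
z = reparameterize(mu, log_var)
x_rec = decode(z)
```

Generating new images with a trained VAE then amounts to skipping the encoder entirely: draw z from the standard normal prior and run only `decode(z)`.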

Since it cannot...