
Hands-On Artificial Intelligence for IoT - Second Edition

By : Amita Kapoor

Overview of this book

There are many applications that use data science and analytics to gain insights from terabytes of data. These apps, however, do not address the challenge of continually discovering patterns in IoT data. In Hands-On Artificial Intelligence for IoT, we cover various aspects of artificial intelligence (AI) and its implementation to make your IoT solutions smarter. This book starts by covering the process of gathering and preprocessing IoT data from distributed sources. You will learn different AI techniques, such as machine learning, deep learning, reinforcement learning, and natural language processing, to build smart IoT systems. You will also leverage the power of AI to handle real-time data coming from wearable devices. As you progress through the book, you will cover techniques for building models that work with the different kinds of data generated and consumed by IoT devices, such as time series, images, and audio. Useful case studies on four major application areas of IoT solutions are a key focal point of this book. In the concluding chapters, you will leverage the power of widely used Python libraries, TensorFlow and Keras, to build different kinds of smart AI models. By the end of this book, you will be able to build smart AI-powered IoT apps with confidence.

Generating images using VAEs


From Chapter 4, Deep Learning for IoT, you should be familiar with autoencoders and their functions. VAEs are a type of autoencoder; here, we retain the (trained) Decoder part, which can be fed random latent features z to generate data similar to the training data. Recall that, in autoencoders, the Encoder produces the low-dimensional features, z:
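As a refresher, the Encoder/Decoder structure can be sketched in Keras as follows. This is a minimal dense autoencoder assuming flattened 28x28 inputs; the layer sizes and names are illustrative, not the chapter's exact architecture:

```python
# Minimal sketch of a dense autoencoder in Keras (illustrative sizes,
# assuming flattened 28x28 inputs; not the chapter's exact model).
import numpy as np
from tensorflow.keras import layers, Model

input_dim, latent_dim = 784, 32

# Encoder: compresses the input x into low-dimensional features z
inputs = layers.Input(shape=(input_dim,))
h = layers.Dense(128, activation="relu")(inputs)
z = layers.Dense(latent_dim, name="z")(h)

# Decoder: reconstructs x from z
h_dec = layers.Dense(128, activation="relu")(z)
outputs = layers.Dense(input_dim, activation="sigmoid")(h_dec)

autoencoder = Model(inputs, outputs)   # trained end to end on x -> x
encoder = Model(inputs, z)             # exposes the latent features z

x = np.random.rand(4, input_dim).astype("float32")
print(encoder.predict(x, verbose=0).shape)       # (4, 32)
print(autoencoder.predict(x, verbose=0).shape)   # (4, 784)
```

Training such a model on a reconstruction loss forces z to capture a compressed representation of the input.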

The architecture of autoencoders

VAEs are concerned with finding the likelihood function p(x) of the data from the latent features z.
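In standard VAE notation (reconstructed here, not taken verbatim from the book), this is the marginal likelihood obtained by integrating the Decoder's conditional distribution over the prior on z:

```latex
p_\theta(x) = \int p_\theta(x \mid z)\, p(z)\, dz
```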

This is an intractable density function, and it isn't possible to optimize it directly; instead, we obtain a lower bound by using a simple Gaussian prior p(z) and making both the Encoder and Decoder networks probabilistic:

 Architecture of a VAE

This allows us to define a tractable lower bound on the log likelihood, given by the following:
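In the standard VAE formulation, this bound (the evidence lower bound, or ELBO) takes the following form, reconstructed here from the usual notation rather than copied from the book:

```latex
\log p_\theta(x) \;\ge\; \mathbb{E}_{z \sim q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] \;-\; D_{\mathrm{KL}}\big(q_\phi(z \mid x)\,\big\|\,p(z)\big)
```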

In the preceding equation, θ represents the Decoder network parameters and φ the Encoder network parameters. The network is trained by maximizing...
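The VAE-specific pieces can be sketched in Keras as follows: a probabilistic Encoder that predicts a mean and log-variance for z, the reparameterization trick for sampling, and the two terms of the (negative) ELBO. Layer sizes and variable names here are illustrative assumptions, not the chapter's exact code:

```python
# Hedged sketch of a VAE in Keras: probabilistic Encoder, the
# reparameterization trick, and the two negative-ELBO terms.
# Sizes and names are illustrative, not the chapter's exact model.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

input_dim, latent_dim = 784, 2

# Probabilistic Encoder q_phi(z|x): predicts mean and log-variance of z
enc_in = layers.Input(shape=(input_dim,))
h = layers.Dense(128, activation="relu")(enc_in)
z_mean = layers.Dense(latent_dim)(h)
z_log_var = layers.Dense(latent_dim)(h)
encoder = Model(enc_in, [z_mean, z_log_var])

# Decoder p_theta(x|z): kept separate so it can generate images on its own
dec_in = layers.Input(shape=(latent_dim,))
h_dec = layers.Dense(128, activation="relu")(dec_in)
dec_out = layers.Dense(input_dim, activation="sigmoid")(h_dec)
decoder = Model(dec_in, dec_out)

x = np.random.rand(4, input_dim).astype("float32")
mean, log_var = encoder(x)

# Reparameterization trick: z = mu + sigma * eps keeps sampling
# differentiable with respect to the Encoder parameters (phi)
eps = tf.random.normal(tf.shape(mean))
z = mean + tf.exp(0.5 * log_var) * eps
x_rec = decoder(z)

# Negative ELBO = reconstruction term + KL divergence to the N(0, I) prior
recon = input_dim * tf.reduce_mean(
    tf.keras.losses.binary_crossentropy(x, x_rec))
kl = -0.5 * tf.reduce_mean(tf.reduce_sum(
    1.0 + log_var - tf.square(mean) - tf.exp(log_var), axis=-1))
neg_elbo = recon + kl

# After training, new images come from random latent features z ~ p(z)
x_gen = decoder(tf.random.normal((4, latent_dim)))
print(x_gen.shape)  # (4, 784)
```

Minimizing neg_elbo (equivalently, maximizing the lower bound) trains both networks jointly; generation then needs only the Decoder, fed with samples from the Gaussian prior.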