Hands-On Deep Learning for IoT

By: Dr. Mohammad Abdur Razzaque, Md. Rezaul Karim
Overview of this book

Artificial intelligence (AI) is growing quickly, driven by advancements in neural networks (NNs) and deep learning (DL). With increasing investment in smart cities, smart healthcare, and the industrial Internet of Things (IoT), the commercialization of IoT will soon peak, and the massive amounts of data generated by IoT devices will need to be processed at scale. Hands-On Deep Learning for IoT provides deeper insights into IoT data, starting by introducing how DL fits into the context of making IoT applications smarter. It then covers how to build deep architectures for IoT using TensorFlow, Keras, and Chainer. You'll learn how to train convolutional neural networks (CNNs) to develop applications for image-based road fault detection and smart garbage separation, followed by implementing voice-initiated smart light control and home access mechanisms powered by recurrent neural networks (RNNs). You'll master IoT applications for indoor localization, predictive maintenance, and locating equipment in a large hospital using autoencoders, DeepFi, and LSTM networks. Furthermore, you'll learn IoT application development for healthcare with enhanced IoT security. By the end of this book, you will have the knowledge needed to use deep learning efficiently to power your IoT-based applications for smarter decision making.
Table of Contents (15 chapters)

Section 1: IoT Ecosystems, Deep Learning Techniques, and Frameworks
Section 2: Hands-On Deep Learning Application Development for IoT
Section 3: Advanced Aspects and Analytics in IoT

Model evaluation

We can evaluate three different aspects of the models:

  • Learning/(re)training time
  • Storage requirement
  • Performance (accuracy)

On a desktop (Intel Xeon E5-1650 CPU and 32 GB RAM) with GPU support, training the LSTM on the CPU-utilization dataset and the autoencoder on the reduced, layer-wise KDD dataset took a few minutes. The DNN model on the overall dataset took a little over an hour, which was expected, as it was trained on a larger dataset (KDD's overall 10% dataset).
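Measuring training time is straightforward to add to any experiment. The following is a minimal, framework-agnostic sketch of timing a (re)training run; the synthetic data and the simple logistic-regression loop are stand-ins for illustration only, not the book's CPU-utilization or KDD datasets or its Keras models (with Keras, one would wrap the `model.fit(...)` call the same way):

```python
# Minimal sketch of measuring (re)training time.
# The dataset and model here are synthetic stand-ins, not the book's
# CPU-utilization or KDD data.
import time
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 10))
y = (X[:, 0] > 0).astype(float)      # label depends only on feature 0

w = np.zeros(10)
start = time.perf_counter()
for _ in range(200):                  # simple logistic-regression training loop
    p = 1.0 / (1.0 + np.exp(-X @ w))  # sigmoid predictions
    w -= 0.1 * X.T @ (p - y) / len(y) # gradient step on the log loss
elapsed = time.perf_counter() - start
print(f"Training took {elapsed:.3f} s")
```

The same pattern (record a timestamp before and after the training call) applies unchanged whether the model is an LSTM, an autoencoder, or a DNN.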

The storage requirement of a model is an essential consideration in resource-constrained IoT devices. The following screenshot presents the storage requirements for the three models we tested for the two use cases:

As shown in the screenshot, the autoencoders' storage requirements were in the range of KB. The final version of a stored autoencoder model took only 85 KB; the LSTM...
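Checking a model's storage footprint amounts to inspecting the size of the serialized model file. Below is a minimal sketch under assumptions: the weight shapes and the pickle-based serialization are hypothetical stand-ins; with Keras one would instead save via `model.save(...)` and inspect that file with the same `os.path.getsize` call:

```python
# Minimal sketch of checking a saved model's storage footprint in KB.
# The weight shapes and pickle serialization are hypothetical; with Keras,
# save the model via model.save(...) and measure that file instead.
import os
import pickle
import tempfile
import numpy as np

weights = [np.zeros((10, 32)), np.zeros((32, 1))]  # stand-in model parameters

with tempfile.NamedTemporaryFile(suffix=".pkl", delete=False) as f:
    pickle.dump(weights, f)
    path = f.name

size_kb = os.path.getsize(path) / 1024.0
print(f"Stored model size: {size_kb:.1f} KB")
os.remove(path)  # clean up the temporary file
```

On a resource-constrained IoT device, this number is what determines whether a model fits in the available flash or RAM, so it is worth reporting alongside accuracy.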