Hands-On Artificial Intelligence for IoT - Second Edition

By: Amita Kapoor

Overview of this book

Many applications use data science and analytics to gain insights from terabytes of data. These applications, however, do not address the challenge of continually discovering patterns in IoT data. In Hands-On Artificial Intelligence for IoT, we cover various aspects of artificial intelligence (AI) and how to apply them to make your IoT solutions smarter.

This book starts with the process of gathering and preprocessing IoT data from distributed sources. You will learn AI techniques such as machine learning, deep learning, reinforcement learning, and natural language processing to build smart IoT systems, and you will leverage AI to handle real-time data coming from wearable devices. As you progress through the book, you will build models that work with the different kinds of data generated and consumed by IoT devices, such as time series, images, and audio. Case studies on four major application areas of IoT solutions are a key focal point of this book. In the concluding chapters, you will use the widely adopted Python libraries TensorFlow and Keras to build different kinds of smart AI models. By the end of this book, you will be able to build smart AI-powered IoT apps with confidence.
Table of Contents (20 chapters)
Title Page
Copyright and Credits
Dedication
About Packt
Contributors
Preface
Index

Deep learning 101


The human mind has always intrigued philosophers, scientists, and engineers alike. Humanity's desire to imitate and replicate the intelligence of the human brain has been written about for many years; Galatea by Pygmalion of Cyprus in Greek mythology, the Golem in Jewish folklore, and Maya Sita in Hindu mythology are just a few examples. Robots with Artificial Intelligence (AI) have been a favorite of (science) fiction writers since time immemorial.


AI, as we know it today, was conceived in parallel with the idea of the computer. The seminal 1943 paper by McCulloch and Pitts, A Logical Calculus of the Ideas Immanent in Nervous Activity, proposed the first neural network model: threshold devices that could perform logical operations such as AND, OR, and AND-NOT. In his pioneering work Computing Machinery and Intelligence, published in 1950, Alan Turing proposed the Turing test, a test to identify whether or not a machine is intelligent. Rosenblatt, in 1957, laid the base...
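To make the McCulloch–Pitts idea concrete, here is a minimal sketch of such a threshold device in Python. The function name, weights, and threshold values are illustrative choices, not taken from the original paper or this book: the unit fires (outputs 1) when the weighted sum of its binary inputs reaches a threshold, and an inhibitory input can be modeled with a negative weight.

```python
def threshold_unit(inputs, weights, threshold):
    """McCulloch-Pitts style neuron: fire (1) if the weighted
    sum of binary inputs meets or exceeds the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# AND: both inputs must be active to reach the threshold of 2
def AND(x1, x2):
    return threshold_unit([x1, x2], [1, 1], 2)

# OR: a single active input is enough to reach the threshold of 1
def OR(x1, x2):
    return threshold_unit([x1, x2], [1, 1], 1)

# AND-NOT (x1 AND NOT x2): the second input is inhibitory (weight -1)
def AND_NOT(x1, x2):
    return threshold_unit([x1, x2], [1, -1], 1)
```

With fixed weights and thresholds like these, a single unit can realize simple logic gates; it was the later addition of learnable weights (Rosenblatt's perceptron) that turned such units into trainable models.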