Artificial Intelligence for IoT Cookbook

By: Michael Roshak
How it works...

Kafka is a streaming engine designed to handle large volumes of data. Data is ingested into Kafka and consumed by Spark, which runs the model and writes the resulting probability back to Kafka. In this example, we used Confluent Cloud and Databricks; both managed services are available on all major cloud marketplaces.
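
The ingest half of that loop can be sketched with Spark Structured Streaming. This is a minimal sketch, not the recipe's exact code: the topic name (engine-data), the broker address, and the telemetry schema are illustrative placeholders, and on Confluent Cloud you would also pass the SASL/SSL options from your cluster settings.

```python
# Minimal sketch of reading engine telemetry from Kafka into Spark.
# The Kafka source needs the spark-sql-kafka connector, which Databricks
# runtimes ship by default. Topic, broker, and schema are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("engine-inference").getOrCreate()

# Assumed schema of the incoming engine readings; adjust to the real payload.
schema = StructType([
    StructField("engine_id", StringType()),
    StructField("temperature", DoubleType()),
    StructField("vibration", DoubleType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "<confluent-bootstrap-server>:9092")
       .option("subscribe", "engine-data")
       .load())

# Kafka delivers keys and values as bytes; decode the value and parse the JSON.
readings = (raw.select(from_json(col("value").cast("string"), schema).alias("r"))
            .select("r.*"))
```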

In this recipe, we received real-time data from the engines, streamed it into Spark, and ran inference on it. Once we had the results, we streamed them back into a separate Kafka topic. Routing the output through its own Kafka topic lets us push the predictions into databases, data lakes, and microservices, all from a single data pipeline.
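
The inference-and-publish half might look like the following sketch, assuming model is the trained Spark ML PipelineModel from earlier in the recipe (with its feature-assembly stage included) and that engine-predictions is a hypothetical output topic:

```python
# Continuing the sketch: score each reading and publish the probability to a
# second Kafka topic. `model` is assumed to be a trained Spark ML PipelineModel
# that assembles its own feature vector; topic, broker, and checkpoint path
# are placeholders.
from pyspark.ml.functions import vector_to_array
from pyspark.sql.functions import to_json, struct

scored = (model.transform(readings)
          # Spark ML emits a probability vector; keep the positive-class entry.
          .withColumn("failure_probability", vector_to_array("probability")[1]))

query = (scored
         .select(to_json(struct("engine_id", "failure_probability")).alias("value"))
         .writeStream
         .format("kafka")
         .option("kafka.bootstrap.servers", "<confluent-bootstrap-server>:9092")
         .option("topic", "engine-predictions")
         .option("checkpointLocation", "/tmp/checkpoints/engine-predictions")
         .start())
```

Because the predictions land on their own topic, any number of downstream consumers (a database sink via Kafka Connect, a data-lake writer, or a microservice) can subscribe to engine-predictions independently, without touching the scoring job.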