Artificial Intelligence for IoT Cookbook

By: Michael Roshak

Overview of this book

Artificial intelligence (AI) is rapidly finding practical applications across a wide variety of industry verticals, and the Internet of Things (IoT) is one of them. Developers are looking for ways to make IoT devices smarter and to make users’ lives easier. With this AI cookbook, you’ll be able to implement smart analytics using IoT data to gain insights, predict outcomes, and make informed decisions, while also covering advanced AI techniques that facilitate analytics and learning in various IoT applications. Using a recipe-based approach, the book will take you through essential processes such as data collection, data analysis, modeling, statistics and monitoring, and deployment. You’ll use real-life datasets from smart homes, industrial IoT, and smart devices to train and evaluate models ranging from simple to complex, and make predictions using the trained models. Later chapters will take you through the key challenges faced while implementing machine learning, deep learning, and other AI techniques, such as natural language processing (NLP), computer vision, and embedded machine learning for building smart IoT systems. In addition to this, you’ll learn how to deploy models and improve their performance with ease. By the end of this book, you’ll be able to package and deploy end-to-end AI apps and apply best practice solutions to common IoT problems.

How to do it...

Follow these steps to complete this recipe:

  1. Import the required libraries: 
import pandas as pd

from sklearn import neighbors, metrics
from sklearn.metrics import roc_auc_score, classification_report, \
    precision_recall_fscore_support, confusion_matrix, precision_score, \
    roc_curve, precision_recall_fscore_support as score
from sklearn.model_selection import train_test_split

import statsmodels.api as sm
import statsmodels.formula.api as smf
  2. Import the data from the BreastCancer table and convert it to a pandas DataFrame:
df = spark.sql("select * from BreastCancer")
pdf = df.toPandas()
  3. Split the data into training and test sets:
X = pdf  # keep every column, including diagnosis, since the formula API needs it
y = pdf['diagnosis']

X_train, X_test, y_train, y_test = \
    train_test_split(X, y, test_size=0.3, random_state=40)
  4. Create the formula (an illustrative example of the resulting string follows these steps):
cols = pdf.columns.drop('diagnosis')
formula = 'diagnosis ~ ' + ' + '.join(cols)
  5. Train the model (a note on inspecting the fitted model follows these steps):
model = smf.glm(formula=formula, data=X_train,
                family=sm.families.Binomial())
logistic_fit = model.fit()
  6. Test our model (see the evaluation sketch after these steps):
predictions...
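
For step 4, the formula string simply names diagnosis as the response and every remaining column as a predictor. The following is a minimal sketch of the string it produces, using hypothetical column names rather than the actual columns of the BreastCancer table:

# Hypothetical feature names, used only to show the shape of the formula
example_cols = ['radius_mean', 'texture_mean', 'perimeter_mean']
example_formula = 'diagnosis ~ ' + ' + '.join(example_cols)
print(example_formula)  # diagnosis ~ radius_mean + texture_mean + perimeter_mean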
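
After step 5, the fitted model can be inspected before it is tested. statsmodels exposes a summary of the coefficients, standard errors, and p-values on the fit result:

# Optional: review the fitted coefficients and their significance
print(logistic_fit.summary())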
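
A minimal sketch of how step 6 could proceed with the metrics imported in step 1 is shown below; the 0.5 cut-off and the 0/1 encoding of diagnosis are assumptions for illustration, not the recipe's exact code:

# Predicted probability of a positive diagnosis for the held-out test set
predictions = logistic_fit.predict(X_test)

# Turn probabilities into class labels with an assumed 0.5 cut-off
predicted_labels = (predictions > 0.5).astype(int)

# Compare against the true labels (assumes diagnosis is encoded as 0/1)
print(roc_auc_score(y_test, predictions))
print(classification_report(y_test, predicted_labels))
print(confusion_matrix(y_test, predicted_labels))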