Artificial Intelligence By Example - Second Edition

By: Denis Rothman

Overview of this book

AI has the potential to replicate humans in every field. Artificial Intelligence By Example, Second Edition serves as a starting point for you to understand how AI is built, with the help of intriguing and exciting examples. This book will make you an adaptive thinker and help you apply concepts to real-world scenarios. Using some of the most interesting AI examples, from computer programs such as a simple chess engine to cognitive chatbots, you will learn how to tackle the machine you are competing with. You will study some of the most advanced machine learning models, understand how to apply AI to blockchain and the Internet of Things (IoT), and develop emotional quotient in chatbots using neural networks such as recurrent neural networks (RNNs) and convolutional neural networks (CNNs). This edition also adds new examples covering hybrid neural networks that combine reinforcement learning (RL) and deep learning (DL); chained algorithms that combine unsupervised learning with decision trees and random forests; DL combined with genetic algorithms; conversational user interfaces (CUI) for chatbots; neuromorphic computing; and quantum computing. By the end of this book, you will understand the fundamentals of AI and have worked through a number of examples that will help you develop your AI solutions.

How to Use Decision Trees to Enhance K-Means Clustering

This chapter addresses two critical issues. First, we will explore how to implement k-means clustering when the volume of the dataset exceeds the capacity of the algorithm. Second, we will implement decision trees to verify the results of an ML algorithm whose output surpasses human analytic capacity. We will also explore the use of random forests.
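
As a preview of the chained approach, the following sketch clusters a dataset with k-means and then trains a decision tree (and, optionally, a random forest) on the resulting cluster labels; the agreement score and the printed rules offer a human-readable way to verify clusters no analyst could check by hand. It is a minimal sketch assuming scikit-learn: the random data, the choice of six clusters, and all hyperparameters are placeholders, not the chapter's actual dataset.

# Minimal sketch, assuming scikit-learn; data and parameters are placeholders
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 4))            # placeholder feature matrix

# Step 1: unsupervised clustering with k-means
kmc = KMeans(n_clusters=6, n_init=10, random_state=42)
labels = kmc.fit_predict(X)

# Step 2: treat the cluster labels as targets and fit a decision tree,
# giving us a readable model that can confirm the clusters
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.2, random_state=42)
tree = DecisionTreeClassifier(max_depth=4, random_state=42)
tree.fit(X_train, y_train)
print("Decision tree agreement with k-means:", tree.score(X_test, y_test))
print(export_text(tree))                  # rules a human can inspect

# Step 3 (optional): a random forest as a stronger verifier
forest = RandomForestClassifier(n_estimators=100, random_state=42)
forest.fit(X_train, y_train)
print("Random forest agreement with k-means:", forest.score(X_test, y_test))

A high agreement score suggests the clusters follow boundaries simple enough for a tree to reproduce; a low score is a signal to revisit the clustering.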

When facing such problems, choosing the right model often proves to be the hardest task in ML. Being handed an unfamiliar set of features to represent can be a puzzling prospect, so we have to get our hands dirty and try different models. An efficient estimator also requires good datasets, which might change the course of the project.

This chapter builds on the k-means clustering (or KMC) program developed in Chapter 4, Optimizing Your Solutions with K-Means Clustering, and addresses the issue of large datasets. This exploration will...
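
One common way to cope with a dataset whose volume strains a standard k-means implementation is to train incrementally on mini-batches. The sketch below illustrates the idea with scikit-learn's MiniBatchKMeans and a chunked CSV reader; the file name, chunk sizes, and number of clusters are assumptions for illustration only and are not necessarily the technique this chapter uses.

# Minimal sketch, assuming scikit-learn and pandas; large_dataset.csv, the
# chunk sizes, and k=6 are hypothetical
import pandas as pd
from sklearn.cluster import MiniBatchKMeans

kmc = MiniBatchKMeans(n_clusters=6, batch_size=1024, random_state=42)

# Stream the file in chunks so the full dataset never sits in memory at once
# (assumes every column is numeric)
for chunk in pd.read_csv("large_dataset.csv", chunksize=10_000):
    kmc.partial_fit(chunk.values)

# Once trained, the model can label any new batch of records
sample = pd.read_csv("large_dataset.csv", nrows=5)
print(kmc.predict(sample.values))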