Introduction to Kernel Density Estimation


So far in this chapter, we've studied parametric distributions, but in practice, real-world distributions are at best approximations of parametric distributions, and often don't resemble any parametric distribution at all. In such cases, we use a technique called Kernel Density Estimation, or KDE, to estimate their probability distributions.

KDE estimates the probability density function of a distribution or random variable from a finite set of points sampled from that distribution, using something called a kernel. This will become clearer as you work through the chapter.
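
As a quick preview of what KDE produces, the following is a minimal sketch using base R's built-in density() function, which performs KDE for you. The simulated sample and the default settings used here are illustrative assumptions rather than an example from the book.

    # Minimal sketch: estimate a density from a finite sample with base R's density().
    # The sample below (a mixture of two normals) is an illustrative assumption.
    set.seed(10)
    sample_points <- c(rnorm(200, mean = -2, sd = 0.8),
                       rnorm(100, mean = 3, sd = 1.2))

    # density() performs KDE; "gaussian" is the default kernel.
    kde_fit <- density(sample_points, kernel = "gaussian")

    # Plot the estimated probability density function.
    plot(kde_fit, main = "Kernel density estimate of the sample")
    rug(sample_points)  # show the raw data points along the x-axis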

KDE Algorithm

Contrary to what its heavy-sounding name might suggest, KDE is a very simple two-step process:

  1. Choosing a kernel

  2. Placing a copy of the kernel on each data point and summing the kernels (see the sketch after this list)
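
To make these two steps concrete, here is a minimal hand-rolled sketch of KDE with a Gaussian kernel. The sample data, the bandwidth value h, and the helper names gaussian_kernel and kde_estimate are illustrative assumptions, not code from the book.

    # Step 1: choose a kernel. Here we use the standard normal density.
    gaussian_kernel <- function(u) exp(-u^2 / 2) / sqrt(2 * pi)

    # Step 2: place a kernel on every data point and sum them:
    # f_hat(x) = (1 / (n * h)) * sum_i K((x - x_i) / h)
    kde_estimate <- function(x, data, h) {
      sapply(x, function(xi) sum(gaussian_kernel((xi - data) / h)) / (length(data) * h))
    }

    set.seed(10)
    data_points <- rnorm(100, mean = 5, sd = 2)   # illustrative sample
    grid <- seq(min(data_points) - 3, max(data_points) + 3, length.out = 200)

    f_hat <- kde_estimate(grid, data_points, h = 0.5)  # h controls kernel width
    plot(grid, f_hat, type = "l", xlab = "x", ylab = "Estimated density")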

A kernel is a non-negative, symmetric function that is used to model distributions. The normal (Gaussian) density, for example, is the most commonly used kernel function in KDE. Kernel functions...