#### Overview of this book

Starting with the basics, Applied Unsupervised Learning with R explains clustering methods, distribution analysis, data encoders, and features of R that enable you to understand your data better and get answers to your most pressing business questions. This book begins with the most important and commonly used method for unsupervised learning - clustering - and explains the three main clustering algorithms - k-means, divisive, and agglomerative. Following this, you'll study market basket analysis, kernel density estimation, principal component analysis, and anomaly detection. You'll be introduced to these methods using code written in R, with further instructions on how to work with, edit, and improve R code. To help you gain a practical understanding, the book also features useful tips on applying these methods to real business problems, including market segmentation and fraud detection. By working through interesting activities, you'll explore data encoders and latent variable models. By the end of this book, you will have a better understanding of different anomaly detection methods, such as outlier detection, Mahalanobis distances, and contextual and collective anomaly detection.

## Chapter 3: Probability Distributions

### Activity 8: Finding the Standard Distribution Closest to the Distribution of Variables of the Iris Dataset

Solution:

1. Load the Iris dataset into the df variable:

`df<-iris`
2. Select rows corresponding to the setosa species only:

`df<-df[df$Species=='setosa',]`
3. Import the kdensity library:

`library(kdensity)`
4. Calculate and plot the KDE from the kdensity function for sepal length:

```
dist <- kdensity(df$Sepal.Length)
plot(dist)
```

The output is as follows:

Figure 3.36 Plot of the KDE for sepal length

This distribution is closest to the normal distribution, which we studied in the previous section. Here, the mean and median are both around 5.
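We can verify this observation directly. The following sketch computes the mean and median of the setosa sepal lengths; for a symmetric, roughly normal distribution, the two should nearly coincide:

```r
# Check the centre of the setosa sepal length distribution:
# for a roughly normal distribution, mean and median should be close
df <- iris[iris$Species == 'setosa', ]
mean(df$Sepal.Length)    # 5.006
median(df$Sepal.Length)  # 5.0
```

Both values are about 5, which matches the peak of the KDE plot.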

5. Calculate and plot the KDE from the kdensity function for sepal width:

```
dist <- kdensity(df$Sepal.Width)
plot(dist)
```

The output is as follows:

Figure 3.37 Plot of the KDE for sepal width

This distribution is also closest to the normal distribution. We can formalize this similarity with a Kolmogorov-Smirnov test.

### Activity 9: Calculating the CDF and Performing the Kolmogorov-Smirnov Test with the Normal Distribution

Solution:

1. Load the Iris dataset into the df variable:

`df<-iris`
2. Keep rows with the setosa species only:

`df<-df[df$Species=='setosa',]`
3. Calculate the mean and standard deviation of the sepal length column of df:

```
sdev<-sd(df$Sepal.Length)
mn<-mean(df$Sepal.Length)
```
4. Generate a new distribution with the standard deviation and mean of the sepal length column:

`xnorm<-rnorm(100,mean=mn,sd=sdev)`
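Note that `rnorm` draws a fresh random sample each time, so your plots and test statistics will differ slightly from those shown here. If you want reproducible results, fix the random seed first (a minimal sketch; the seed value is arbitrary):

```r
# Fix the RNG seed so the sample, and every statistic computed from it,
# is identical across runs (123 is an arbitrary choice)
df <- iris[iris$Species == 'setosa', ]
set.seed(123)
xnorm <- rnorm(100, mean = mean(df$Sepal.Length), sd = sd(df$Sepal.Length))
```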
5. Plot the CDF of both xnorm and the sepal length column:

```
plot(ecdf(xnorm),col='blue')
plot(ecdf(df$Sepal.Length),add=TRUE,col='red')
```

The output is as follows:

Figure 3.38: The CDF of xnorm and sepal length

The two empirical CDFs lie very close to each other. In the next step, we test formally whether the sepal length sample could have come from the normal distribution.

6. Perform the Kolmogorov-Smirnov test on the two samples, as follows:

`ks.test(xnorm,df$Sepal.Length)`

The output is as follows:

```
    Two-sample Kolmogorov-Smirnov test
data: xnorm and df$Sepal.Length
D = 0.14, p-value = 0.5307
alternative hypothesis: two-sided
```

Here, the p-value is high and the D statistic is low, so we cannot reject the null hypothesis that the two samples come from the same distribution; the distribution of sepal length is closely approximated by the normal distribution.
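To see what the D statistic measures, we can compute it by hand: D is the largest vertical gap between the two empirical CDFs, evaluated at the pooled sample points. A minimal sketch (the seed is an arbitrary choice for reproducibility):

```r
# Recreate the two samples
df <- iris[iris$Species == 'setosa', ]
set.seed(1)  # arbitrary seed, so the result is reproducible
xnorm <- rnorm(100, mean = mean(df$Sepal.Length), sd = sd(df$Sepal.Length))

# D = largest vertical gap between the two empirical CDFs,
# checked at every pooled observation (where the step functions jump)
pooled <- sort(c(xnorm, df$Sepal.Length))
D <- max(abs(ecdf(xnorm)(pooled) - ecdf(df$Sepal.Length)(pooled)))
D
```

This value agrees with the `D` reported by `ks.test(xnorm, df$Sepal.Length)`.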

7. Repeat the same steps for the sepal width column of df:

```
sdev<-sd(df$Sepal.Width)
mn<-mean(df$Sepal.Width)
xnorm<-rnorm(100,mean=mn,sd=sdev)
plot(ecdf(xnorm),col='blue')
plot(ecdf(df$Sepal.Width),add=TRUE,col='red')
```

The output is as follows:

Figure 3.39: CDF of xnorm and sepal width

8. Perform the Kolmogorov-Smirnov test as follows:

`ks.test(xnorm,df$Sepal.Width)`

The output is as follows:

```
    Two-sample Kolmogorov-Smirnov test

data: xnorm and df$Sepal.Width
D = 0.12, p-value = 0.7232
alternative hypothesis: two-sided
```

Here, too, the high p-value indicates that the distribution of sepal width is closely approximated by the normal distribution.
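As a variation (not part of the original activity), `ks.test` also supports a one-sample form that compares the data directly against a named theoretical CDF, which avoids the extra sampling noise introduced by `rnorm`. Two caveats: estimating the mean and standard deviation from the same data makes the p-value only approximate (an exact version would need the Lilliefors correction), and the repeated measurement values in the iris data trigger a ties warning:

```r
# One-sample variant: compare sepal width directly to a fitted normal CDF.
# The p-value is approximate because mean/sd are estimated from the data.
df <- iris[iris$Species == 'setosa', ]
res <- ks.test(df$Sepal.Width, 'pnorm',
               mean = mean(df$Sepal.Width), sd = sd(df$Sepal.Width))
res
```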