#### Overview of this book

Unsupervised learning is a useful and practical solution in situations where labeled data is not available. Applied Unsupervised Learning with Python guides you in learning the best practices for using unsupervised learning techniques in tandem with Python libraries and extracting meaningful information from unstructured data. The book begins by explaining how basic clustering works to find similar data points in a set. Once you are well-versed with the k-means algorithm and how it operates, you’ll learn what dimensionality reduction is and where to apply it. As you progress, you’ll learn various neural network techniques and how they can improve your model. While studying the applications of unsupervised learning, you will also understand how to mine topics that are trending on Twitter and Facebook and build a news recommendation engine for users. Finally, you will be able to put your knowledge to work through interesting activities such as performing a Market Basket Analysis and identifying relationships between different products. By the end of this book, you will have the skills you need to confidently build your own models using Python.

## Chapter 2: Hierarchical Clustering

### Activity 2: Applying Linkage Criteria

Solution:

1. Visualize the X dataset that we created in Exercise 7, Building a Hierarchy:

```
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster
from sklearn.datasets import make_blobs
import matplotlib.pyplot as plt
%matplotlib inline

# Generate a random cluster dataset to experiment on.
# X = coordinate points, y = cluster labels (not needed)
X, y = make_blobs(n_samples=1000, centers=8, n_features=2, random_state=800)

# Visualize the data
plt.scatter(X[:,0], X[:,1])
plt.show()
```

The output is as follows:

Figure 2.20: A scatter plot of the generated cluster dataset
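
Before comparing linkage methods, it can also help to look at the hierarchy itself. The `dendrogram` function is already imported above; here is a minimal sketch (the choice of centroid linkage and the truncation settings are illustrative, not part of the original activity):

```
# Build a linkage matrix for one method and draw a truncated dendrogram
Z = linkage(X, method='centroid')
dendrogram(Z, truncate_mode='lastp', p=20)  # show only the last 20 merges
plt.title("Truncated dendrogram (centroid linkage)")
plt.show()
```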

2. Create a list with all the possible linkage method hyperparameters:

`methods = ['centroid', 'single', 'complete', 'average', 'weighted']`
3. Loop through each of the methods in the list that you just created and display the effect that they have on the same dataset:

```
for method in methods:
    # Build the linkage matrix for this method, then extract flat clusters
    distances = linkage(X, method=method, metric="euclidean")
    clusters = fcluster(distances, 3, criterion="distance")
    plt.scatter(X[:,0], X[:,1], c=clusters, cmap='tab20b')
    plt.title('linkage method: ' + method)
    plt.show()
```

The output is as follows:

Figure 2.21: A scatter plot for all the methods

As you can see from the preceding plots, simply changing the linkage criterion can dramatically change the efficacy of your clustering. In this dataset, centroid and average linkage work best at finding discrete clusters that make sense. This is clear from the fact that we generated a dataset of eight clusters, and centroid and average linkage are the only methods that recover eight distinct clusters, shown as eight different colors. The other linkage types fall short, most noticeably single linkage.
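
To back up the visual comparison with a quick count, you can tally how many flat clusters each method produces at the same distance threshold used in the loop above. A minimal sketch (assuming the `X` array and `methods` list from the previous steps; the exact counts depend on the generated data):

```
import numpy as np

# Count the flat clusters each linkage method yields at distance threshold 3
for method in methods:
    Z = linkage(X, method=method, metric="euclidean")
    clusters = fcluster(Z, 3, criterion="distance")
    print(method, "->", len(np.unique(clusters)), "clusters")
```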


### Activity 3: Comparing k-means with Hierarchical Clustering

Solution:

1. Import the necessary packages from scikit-learn (KMeans, AgglomerativeClustering, and silhouette_score), as follows:

```
from sklearn.cluster import KMeans
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import silhouette_score
import pandas as pd
import matplotlib.pyplot as plt
```

2. Read the wine dataset into the pandas DataFrame and print a small sample:

```
wine_df = pd.read_csv("wine_data.csv")

# Print a small sample of the data
print(wine_df.head())
```

The output is as follows:

Figure 2.22: The output of the wine dataset

3. Visualize the wine dataset to understand the data structure:

```
plt.scatter(wine_df.values[:,0], wine_df.values[:,1])
plt.title("Wine Dataset")
plt.ylabel("Proline")
plt.show()
```

The output is as follows:

Figure 2.23: A plot of raw wine data

4. Use the sklearn implementation of k-means on the wine dataset, knowing that there are three wine types:

```
km = KMeans(3)
km_clusters = km.fit_predict(wine_df)
```
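
Note that k-means starts from randomly initialized centroids, so the labels (and occasionally the clusters themselves) can differ between runs. If you want repeatable results, here is a hedged variant (the seed value is arbitrary, not from the original solution):

```
# Fix the random seed so repeated runs produce the same clusters
km = KMeans(n_clusters=3, random_state=42)
km_clusters = km.fit_predict(wine_df)
```
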
5. Use the sklearn implementation of hierarchical clustering on the wine dataset:

```
ac = AgglomerativeClustering(3, linkage='average')
ac_clusters = ac.fit_predict(wine_df)
```
6. Plot the predicted clusters from k-means, as follows:

```
plt.scatter(wine_df.values[:,0], wine_df.values[:,1], c=km_clusters)
plt.title("Wine Clusters from K-Means Clustering")
plt.ylabel("Proline")
plt.show()
```

The output is as follows:

Figure 2.24: A plot of clusters from k-means clustering

7. Plot the predicted clusters from hierarchical clustering, as follows:

```
plt.scatter(wine_df.values[:,0], wine_df.values[:,1], c=ac_clusters)
plt.title("Wine Clusters from Agglomerative Clustering")
plt.ylabel("Proline")
plt.show()
```

The output is as follows:

Figure 2.25: A plot of clusters from agglomerative clustering

8. Compare the silhouette score of each clustering method:

```print("Silhouette Scores for Wine Dataset:\n")
print("k-means Clustering: ", silhouette_score(X[:,11:13], km_clusters))
print("Agg Clustering: ", silhouette_score(X[:,11:13], ac_clusters))```

The output will be as follows:

Figure 2.26: Silhouette scores for the wine dataset

As you can see from the preceding silhouette metric, agglomerative clustering narrowly beats k-means clustering when it comes to separating the clusters by mean intra-cluster distance. This is not the case for every variant of agglomerative clustering, however: try different linkage types and examine how the silhouette score and the clustering change with each!
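
As a starting point for that comparison, here is a minimal sketch (assuming `wine_df` from the steps above; the set of linkage types shown is illustrative):

```
# Compare silhouette scores across linkage types on the same data
for linkage_method in ['ward', 'complete', 'average', 'single']:
    ac = AgglomerativeClustering(n_clusters=3, linkage=linkage_method)
    labels = ac.fit_predict(wine_df)
    print(linkage_method, ": ", silhouette_score(wine_df, labels))
```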