The Unsupervised Learning Workshop

By: Aaron Jones, Christopher Kruger, Benjamin Johnston

Overview of this book

Do you find it difficult to understand how popular companies like WhatsApp and Amazon extract valuable insights from large amounts of unorganized data? The Unsupervised Learning Workshop will give you the confidence to deal with cluttered and unlabeled datasets, using unsupervised algorithms in an easy and interactive manner. The book starts by introducing the most popular clustering algorithms of unsupervised learning. You'll find out how hierarchical clustering differs from k-means, and learn how to apply DBSCAN to highly complex and noisy data. Moving ahead, you'll use autoencoders for efficient data encoding. As you progress, you'll use t-SNE models to project high-dimensional information into a lower-dimensional space for better visualization, and work with topic modeling to implement natural language processing (NLP). In later chapters, you'll find key relationships between customers and businesses using Market Basket Analysis, before going on to use Hotspot Analysis to estimate the population density of an area. By the end of this book, you'll be equipped with the skills you need to apply unsupervised algorithms to cluttered datasets to find useful patterns and insights.

Stochastic Neighbor Embedding (SNE)

SNE is one of a number of methods that fall within the category of manifold learning, which aims to describe high-dimensional spaces within low-dimensional manifolds or bounded areas. At first glance, this seems like an impossible task; how can we reasonably represent data in two dimensions if we have a dataset with at least 30 features? As we work through the derivation of SNE, you will hopefully see how this is possible. Don't worry – we will not cover the mathematical details of this process in great depth, as they are outside the scope of this chapter. Constructing an SNE can be divided into the following steps:

  1. Convert the distances between datapoints in the high-dimensional space into conditional probabilities. Say we had two points, x_i and x_j, in a high-dimensional space, and we wanted to determine the probability, p_{j|i}, that x_j would be picked as a neighbor of x_i. To define this probability, we use...
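
The passage is cut off before the formula is stated, but this step corresponds to the standard SNE formulation, in which p_{j|i} is defined by a Gaussian kernel centered on x_i and normalized over all other points. The following is a minimal sketch of that computation in NumPy, offered under simplifying assumptions rather than as the book's own code: the function name conditional_probabilities is made up for illustration, and a single fixed sigma is used for every point, whereas full SNE tunes a separate sigma_i per point to match a target perplexity.

    import numpy as np

    def conditional_probabilities(X, sigma=1.0):
        """
        Convert pairwise distances in the high-dimensional space into
        conditional probabilities p_{j|i} using a Gaussian kernel.
        A single fixed sigma is used here for simplicity.
        """
        # Squared Euclidean distances between all pairs of points
        sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)

        # Unnormalized Gaussian affinities
        affinities = np.exp(-sq_dists / (2 * sigma ** 2))

        # A point is never considered its own neighbor
        np.fill_diagonal(affinities, 0.0)

        # Normalize each row so the probabilities for point i sum to 1
        return affinities / affinities.sum(axis=1, keepdims=True)

    # Example: 5 points with 30 features
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5, 30))
    P = conditional_probabilities(X)
    print(P.round(3))      # P[i, j] is p_{j|i}
    print(P.sum(axis=1))   # each row sums to 1

Row i of the resulting matrix holds point x_i's distribution over its possible neighbors; in the full algorithm, these high-dimensional probabilities are later compared against an analogous set of probabilities computed in the low-dimensional map.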