Interpreting t-SNE Plots


Now that we are able to use t-distributed Stochastic Neighbor Embedding (t-SNE) to visualize high-dimensional data, it is important to understand the limitations of such plots and which aspects matter when generating and interpreting them. In this section of the chapter, we will highlight some of the important features of t-SNE and demonstrate the care that should be taken when using this visualization technique.

Perplexity

As described in the introduction to t-SNE, the perplexity value specifies the number of nearest neighbors to be used in computing the conditional probabilities. The selection of this value can make a significant difference to the end result: with a low perplexity value, local variations in the data dominate, because only a small number of samples is used in each calculation. Conversely, a large perplexity value takes more global variation into account, as many more samples are used in the calculation. It is typically worth trying a range of different values to investigate the effect of perplexity...
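
As a minimal sketch of such a sweep (not taken from the book's own examples), the following compares embeddings produced at a low, moderate, and high perplexity. It assumes scikit-learn's TSNE implementation and uses the built-in digits dataset as a stand-in for any high-dimensional data:

# A hypothetical sketch: sweep perplexity values and plot each embedding
# side by side to compare local versus global structure.
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

digits = load_digits()  # 1,797 samples of 64-dimensional digit images

# Low, moderate, and high perplexity values chosen for illustration
perplexities = [5, 30, 100]
fig, axes = plt.subplots(1, len(perplexities), figsize=(15, 5))

for ax, perplexity in zip(axes, perplexities):
    # n_components=2 projects the data onto a plane for plotting;
    # random_state fixes the embedding so runs are comparable
    embedded = TSNE(n_components=2, perplexity=perplexity,
                    random_state=0).fit_transform(digits.data)
    ax.scatter(embedded[:, 0], embedded[:, 1], c=digits.target,
               cmap='tab10', s=5)
    ax.set_title(f'perplexity = {perplexity}')

plt.show()

With a low perplexity such as 5, you would typically see many small, fragmented clusters driven by local neighborhoods; at a high value such as 100, the clusters tend to merge into broader, more global groupings.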