Applied Unsupervised Learning with Python

By: Benjamin Johnston, Aaron Jones, Christopher Kruger

Overview of this book

Unsupervised learning is a useful and practical solution in situations where labeled data is not available. Applied Unsupervised Learning with Python guides you in learning the best practices for using unsupervised learning techniques in tandem with Python libraries and extracting meaningful information from unstructured data. The book begins by explaining how basic clustering works to find similar data points in a set. Once you are well-versed with the k-means algorithm and how it operates, you’ll learn what dimensionality reduction is and where to apply it. As you progress, you’ll learn various neural network techniques and how they can improve your model. While studying the applications of unsupervised learning, you will also understand how to mine topics that are trending on Twitter and Facebook and build a news recommendation engine for users. Finally, you will be able to put your knowledge to work through interesting activities such as performing a Market Basket Analysis and identifying relationships between different products. By the end of this book, you will have the skills you need to confidently build your own models using Python.

Autoencoders


Now that we are comfortable developing supervised neural network models in Keras, we can return our attention to unsupervised learning and the main subject of this chapter: autoencoders. An autoencoder is a neural network architecture specifically designed to compress the input information into a lower-dimensional space in an efficient yet descriptive manner. An autoencoder can be decomposed into two sub-networks or stages: an encoding stage and a decoding stage. The first, or encoding, stage takes the input and compresses it through a subsequent layer that has fewer units than the size of the input sample. The second, or decoding, stage then expands the compressed representation and aims to reconstruct the data in its original form. As such, the inputs and desired outputs of the network are the same; the network takes, say, an image from the CIFAR-10 dataset and tries to return that same image. This network architecture...
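
To make the encoder/decoder split concrete, the following is a minimal sketch of a fully connected autoencoder in Keras trained on flattened CIFAR-10 images. The layer sizes (a 128-unit bottleneck), optimizer, loss, and the use of tensorflow.keras are illustrative assumptions rather than the book's exact configuration; note that the model is fit on (x, x) pairs, reflecting the fact that the input and the desired output are the same.

# A minimal autoencoder sketch, assuming flattened CIFAR-10 images
# (32 x 32 x 3 = 3072 values) scaled to the range [0, 1].
import numpy as np
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

# Load CIFAR-10, flatten each image, and scale pixel values to [0, 1]
(x_train, _), (x_test, _) = cifar10.load_data()
x_train = x_train.reshape(len(x_train), -1).astype("float32") / 255.0
x_test = x_test.reshape(len(x_test), -1).astype("float32") / 255.0

# Encoding stage: compress the 3072-dimensional input down to 128 units
inputs = Input(shape=(3072,))
encoded = Dense(128, activation="relu")(inputs)

# Decoding stage: expand the compressed representation back to 3072 values
decoded = Dense(3072, activation="sigmoid")(encoded)

# The inputs and desired outputs are the same, so we train on (x, x) pairs
autoencoder = Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.fit(x_train, x_train,
                epochs=10, batch_size=256,
                validation_data=(x_test, x_test))

After training, calling autoencoder.predict(x_test) returns reconstructions that can be compared against the original images to judge how much information the 128-unit bottleneck preserves.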