Hands-On Unsupervised Learning with Python

By: Giuseppe Bonaccorso

Overview of this book

Unsupervised learning is about applying learning algorithms to raw, unlabeled data so that a machine can discover its hidden structure. With this book, you will use Python to explore unsupervised learning, clustering large datasets and analyzing them iteratively until the desired outcome is found. The book starts with the key differences between supervised, unsupervised, and semi-supervised learning. You will be introduced to the most widely used libraries and frameworks from the Python ecosystem and address unsupervised learning in both the machine learning and deep learning domains. You will explore various algorithms and techniques used to implement unsupervised learning in real-world use cases, and learn a variety of unsupervised approaches, including randomized optimization, clustering, feature selection and transformation, and information theory. You will get hands-on experience with how neural networks can be employed in unsupervised scenarios, and you will also explore the steps involved in building and training a GAN to process images. By the end of this book, you will have learned the art of unsupervised learning for different real-world challenges.
Table of Contents (12 chapters)

Chapter 6

  1. As the random variables are clearly independent, P(Tall, Rain) = P(Tall)P(Rain) = 0.75 × 0.2 = 0.15.
  2. One of the main drawbacks of histograms is that when the number of bins is too large, many of them end up empty, because there are no samples in many of the value ranges. In this case, either the cardinality of X can be smaller than 1,000, or, even with more than 1,000 samples, the relative frequencies can be concentrated in fewer than 1,000 bins.
  3. The total number of samples is 75, and the bins have equal lengths. Hence, P(0 < x < 2) = 20/75 ≈ 0.27, P(2 < x < 4) = 30/75 = 0.4, and P(4 < x < 6) = 25/75 ≈ 0.33. As we don't have any samples beyond 6, we can assume that P(x > 6) = 0; therefore, P(x > 2) = P(2 < x < 4) + P(4 < x < 6) ≈ 0.73. We have an immediate confirmation, considering that P(0 < x < 2) + P(x > 2) ≈ 0.27 + 0.73 = 1.
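The effect described in answer 2 can be sketched numerically: with far more bins than samples, most bins necessarily stay empty. This is a minimal illustration (the 1,000-sample normal distribution and the bin count are assumptions, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)            # 1,000 samples of a continuous variable

# With 5,000 bins and only 1,000 samples, at least 4,000 bins must be empty,
# so the histogram is a very poor density estimate.
counts, _ = np.histogram(x, bins=5000)
print("empty bins:", (counts == 0).sum(), "out of", counts.size)
```

In practice, the number of bins is kept small relative to the sample size (for example, via Sturges' or the Freedman-Diaconis rule) so that the relative frequencies are meaningful.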
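The arithmetic in answer 3 can be reproduced directly from the bin counts, treating relative frequencies as probability estimates (the counts 20, 30, and 25 are those stated in the answer):

```python
import numpy as np

# Counts for the equal-length bins (0, 2), (2, 4), and (4, 6); 75 samples total.
counts = np.array([20, 30, 25])
probs = counts / counts.sum()        # relative frequencies as probabilities

# P(x > 2) = P(2 < x < 4) + P(4 < x < 6), since P(x > 6) = 0.
p_gt_2 = probs[1] + probs[2]

print(np.round(probs, 2))            # ≈ [0.27, 0.4, 0.33]
print(round(p_gt_2, 2))              # ≈ 0.73
```

Since the three bins cover all observed samples, the probabilities sum to 1, which confirms P(0 < x < 2) + P(x > 2) = 1.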