Practical Discrete Mathematics

By: Ryan T. White, Archana Tikayat Ray

Overview of this book

Discrete mathematics deals with the study of countable, distinct elements, and its principles are widely used in building algorithms for computer science and data science. Knowledge of discrete math concepts will help you understand the algorithms, binary, and general mathematics that sit at the core of data-driven tasks. Practical Discrete Mathematics is a comprehensive introduction for those who are new to the mathematics of countable objects. This book will help you get up to speed with using discrete math principles to take your computer science skills to a more advanced level. As you learn the language of discrete mathematics, you’ll also cover methods crucial to studying and describing computer science and machine learning objects and algorithms. The chapters that follow will guide you through how memory and CPUs work. In addition, you’ll learn how to analyze data for useful patterns before finally exploring how to apply math concepts in network routing, web searching, and data science. By the end of this book, you’ll have a deeper understanding of discrete math and its applications in computer science, and you’ll be ready to work on real-world algorithm development and machine learning.
Table of Contents (17 chapters)

Part I – Basic Concepts of Discrete Math
Part II – Implementing Discrete Mathematics in Data and Computer Science
Part III – Real-World Applications of Discrete Mathematics

The scikit-learn implementation of PCA

In this section, we will apply PCA to the pizza.csv dataset (which we explored in the first section of this chapter) using the scikit-learn library's PCA class.

As discussed in the previous section, there are two ways to choose how many principal components to use, and the choice depends on the goal you are trying to achieve – whether you want to reduce the dimensionality so that the data can be plotted in 2-dimensional or 3-dimensional space, or to keep enough principal components to retain a certain proportion of the variance.
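
As a rough illustration of both approaches, scikit-learn's PCA class accepts n_components either as an integer (the exact number of components to keep) or as a float between 0 and 1 (keep however many components are needed to explain at least that proportion of the variance). The following is a minimal sketch on synthetic stand-in data rather than the pizza dataset itself:

import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in data: 100 samples with 7 features, mimicking the shape
# of the pizza dataset's seven nutritional measurements
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 7))

# Approach 1: keep an exact number of principal components (here, 2 for plotting)
pca_fixed = PCA(n_components=2)
X_2d = pca_fixed.fit_transform(X)
print(X_2d.shape)                               # (100, 2)

# Approach 2: keep enough components to explain at least 95% of the variance
pca_var = PCA(n_components=0.95)
X_reduced = pca_var.fit_transform(X)
print(pca_var.n_components_)                    # number of components kept
print(pca_var.explained_variance_ratio_.sum())  # proportion of variance retained

In the second case, scikit-learn decides the number of components for you, which is convenient when your target is a variance threshold rather than a plot.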

First, we will implement the approach where we select the number of principal components to keep. We will reduce the 7-dimensional pizza dataset to two principal components so that, instead of comparing and visualizing the data in higher dimensions, we can see in a 2D plot how the pizzas produced by 10 different companies differ in their nutritional content...
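
A sketch of that workflow is shown below. The column names (brand for the manufacturer and the seven nutritional measurements) are assumptions about how pizza.csv is laid out rather than details taken from the text, and standardizing the features first is one common preprocessing choice before PCA, not necessarily the exact procedure followed here:

import pandas as pd
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Load the dataset; the 'brand' column and the seven nutritional column
# names below are hypothetical placeholders for the real pizza.csv layout
pizza = pd.read_csv('pizza.csv')
features = ['mois', 'prot', 'fat', 'ash', 'sodium', 'carb', 'cal']

# Standardize the features so that no single nutrient dominates the
# principal components simply because of its scale
X = StandardScaler().fit_transform(pizza[features])

# Reduce the 7-dimensional data to two principal components
pca = PCA(n_components=2)
X_2d = pca.fit_transform(X)

# Scatter plot of the two principal components, one color per company
for brand in pizza['brand'].unique():
    mask = (pizza['brand'] == brand).to_numpy()
    plt.scatter(X_2d[mask, 0], X_2d[mask, 1], label=brand, s=15)
plt.xlabel('First principal component')
plt.ylabel('Second principal component')
plt.legend(title='Brand')
plt.show()

If the pizzas from different companies really do differ in nutritional content, they should show up as separate clusters in this plot.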