The Supervised Learning Workshop - Second Edition

By : Blaine Bateman, Ashish Ranjan Jha, Benjamin Johnston, Ishita Mathur

Overview of this book

Would you like to understand how and why machine learning techniques and data analytics are spearheading enterprises globally? From analyzing bioinformatics to predicting climate change, machine learning plays an increasingly pivotal role in our society. Although the real-world applications may seem complex, this book simplifies supervised learning for beginners with a step-by-step interactive approach. Working with real-time datasets, you’ll learn how supervised learning, when used with Python, can produce efficient predictive models. Starting with the fundamentals of supervised learning, you’ll quickly move to understand how to automate manual tasks and the process of assessing data using Jupyter and Python libraries like pandas. Next, you’ll use data exploration and visualization techniques to develop powerful supervised learning models, before understanding how to distinguish variables and represent their relationships using scatter plots, heatmaps, and box plots. After using regression and classification models on real-time datasets to predict future outcomes, you’ll grasp advanced ensemble techniques such as boosting and random forests. Finally, you’ll learn the importance of model evaluation in supervised learning and study metrics to evaluate regression and classification tasks. By the end of this book, you’ll have the skills you need to work on your real-life supervised learning Python projects.
Table of Contents (9 chapters)

Classification Using K-Nearest Neighbors

Now that we are comfortable with creating multiclass classifiers using logistic regression and are getting reasonable performance with these models, we will turn our attention to another type of classifier: the K-nearest neighbors (KNN) classifier. KNN is a non-probabilistic, non-linear classifier: it does not predict the probability of a class, and because it does not learn any parameters, there is no linear combination of learned parameters, which makes it a non-linear model:
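As a brief sketch of how such a classifier is used in practice (this example assumes scikit-learn, which is part of the Python stack used throughout the book; the toy data here is illustrative, not from the book's datasets):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Toy two-class dataset: points near (0, 0) belong to class 0,
# points near (5, 5) belong to class 1
X = np.array([[0, 0], [1, 0], [0, 1],
              [5, 5], [6, 5], [5, 6]])
y = np.array([0, 0, 0, 1, 1, 1])

# KNN simply stores the training data; no parameters are learned during fit
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X, y)

# A test point is classified by majority vote among its 3 nearest neighbors
print(knn.predict([[0.5, 0.5]]))  # near the class-0 cluster
print(knn.predict([[5.5, 5.5]]))  # near the class-1 cluster
```

Note that `fit` does essentially no work beyond storing `X` and `y`; all the computation (finding neighbors and taking the vote) happens at prediction time.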

Figure 5.24: Visual representation of KNN

Figure 5.24 represents the workings of a KNN classifier. The two different symbols, X and O, represent data points belonging to two different classes. The solid circle at the center is the test point requiring classification; the inner dotted circle shows the classification process when k=3, while the outer dotted circle shows the classification process when k=5. What we mean here is that, if k=3, we only look...
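The effect the figure illustrates, that the predicted class can change with k, can be reproduced in a few lines (again assuming scikit-learn; the coordinates are chosen so the two nearest neighbors of the test point belong to one class and the next three to the other):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Five training points around the origin; class 0 plays the role of "X",
# class 1 the role of "O" in Figure 5.24
X = np.array([[1, 0], [-1, 0],   # class 0, at distance 1 from the origin
              [0, 1.5],          # class 1, at distance 1.5
              [2, 0], [0, -2]])  # class 1, at distance 2
y = np.array([0, 0, 1, 1, 1])

test_point = [[0, 0]]
for k in (3, 5):
    pred = KNeighborsClassifier(n_neighbors=k).fit(X, y).predict(test_point)
    print(f"k={k}: predicted class {pred[0]}")
# k=3: the vote is two class-0 points vs. one class-1 point -> class 0
# k=5: all five points vote, three of them class 1       -> class 1
```

This is exactly the situation the inner and outer dotted circles depict: widening the neighborhood from 3 to 5 points flips the majority vote.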