The Python Workshop - Second Edition

By Corey Wade, Mario Corchero Jiménez, Andrew Bird, Dr. Lau Cher Han, Graham Lee
Overview of this book

Python is among the most popular programming languages in the world. It’s ideal for beginners because it’s easy to read and write, and for developers because it’s widely available with a strong support community, extensive documentation, and phenomenal libraries – both built-in and user-contributed. This project-based course has been designed by a team of expert authors to get you up and running with Python. You’ll work through engaging projects that’ll enable you to leverage your newfound Python skills efficiently in technical jobs, personal projects, and job interviews. The book will help you gain an edge in data science, web development, and software development, preparing you to tackle real-world challenges in Python and pursue advanced topics on your own. Throughout the chapters, each component has been explicitly designed to engage and stimulate different parts of the brain so that you can retain and apply what you learn in practical contexts with maximum impact. By completing the course from start to finish, you’ll walk away feeling capable of tackling any real-world Python development problem.

K-nearest neighbors, decision trees, and random forests

Are there other ML algorithms, besides LinearRegression(), that are suitable for the Boston Housing dataset? Absolutely. The scikit-learn library provides many regressors, the class of ML algorithms suited to predicting continuous target values. In addition to linear regression, Ridge, and Lasso, we can try k-nearest neighbors, decision trees, and random forests. These models perform well on a wide range of datasets. Let’s try them out and analyze them individually.
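Because every scikit-learn regressor shares the same fit/predict interface, one cross-validation helper lets us compare them all. Here is a minimal sketch, assuming X and y already hold the Boston Housing features and target prepared earlier in the chapter; the helper name regression_model_cv is illustrative, not a library function:

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.linear_model import LinearRegression, Ridge, Lasso

    def regression_model_cv(model, X, y, k=5):
        # scikit-learn maximizes scores, so MSE is reported as a
        # negative number; negate it and take the square root
        # to obtain the more readable RMSE.
        scores = cross_val_score(model, X, y,
                                 scoring='neg_mean_squared_error', cv=k)
        rmse = np.sqrt(-scores)
        print(f'{model.__class__.__name__} mean RMSE: {rmse.mean():.2f}')

    regression_model_cv(LinearRegression(), X, y)
    regression_model_cv(Ridge(), X, y)
    regression_model_cv(Lasso(), X, y)

The same helper can be reused unchanged for the new models that follow: only the regressor passed in changes.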

K-nearest neighbors

The idea behind k-nearest neighbors (KNN) is straightforward. To predict the output of a row with an unknown label, KNN finds the k rows in the training data that are closest to it and bases the prediction on their outputs; for regression, the prediction is typically the mean of those k neighbors’ target values, where k may be any whole number.

For instance, let’s say that k=3. Given a row with an unknown label, we treat its n feature columns as a point in n-dimensional space. Then, we look for the three closest points...
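In scikit-learn, this algorithm is provided by KNeighborsRegressor. A minimal sketch, assuming X and y as before and reusing the regression_model_cv helper sketched earlier; n_neighbors=3 mirrors the k=3 example:

    from sklearn.neighbors import KNeighborsRegressor

    # k=3: each prediction averages the target values of the three
    # training points nearest in feature space.
    knn = KNeighborsRegressor(n_neighbors=3)
    regression_model_cv(knn, X, y)

Because KNN predictions depend on distances between points, scaling the features first (for example, with scikit-learn’s StandardScaler) often improves results in practice.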