In this chapter, we introduced KNN, a simple but powerful model that can be used in classification and regression tasks. KNN is a lazy learner and a non-parametric model; it does not estimate the values of a fixed number of parameters from the training data. Instead, it stores all the training instances and uses the instances that are nearest to the test instance to predict the value of the response variable. We worked through toy classification and regression problems. We also introduced scikit-learn's transformer interface; we used LabelBinarizer to transform string labels to binary labels and StandardScaler to standardize our features.
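The pieces summarized above can be combined in a short sketch. The toy data below is hypothetical; the pattern it illustrates is the one described in this chapter: binarize string labels with LabelBinarizer, standardize features with StandardScaler, then fit a KNeighborsClassifier.

```python
import numpy as np
from sklearn.preprocessing import LabelBinarizer, StandardScaler
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical toy data: two numeric features per instance, string labels
X_train = np.array([[158, 64], [170, 86], [183, 84], [191, 80],
                    [155, 49], [163, 59], [180, 67], [158, 54]])
y_train = ['male', 'male', 'male', 'male',
           'female', 'female', 'female', 'female']

# LabelBinarizer transforms the string labels to binary (0/1) labels
lb = LabelBinarizer()
y_train_binarized = lb.fit_transform(y_train)

# StandardScaler standardizes each feature to zero mean and unit variance,
# so neither feature dominates the distance calculation
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)

# KNN stores the training instances and predicts from the nearest neighbors
clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X_train_scaled, y_train_binarized.ravel())

# Apply the same scaling to the test instance, then recover the string label
X_test = scaler.transform(np.array([[168, 65]]))
prediction = lb.inverse_transform(clf.predict(X_test))
```

Note that the scaler is fit only on the training data and then reused to transform the test instance; fitting it separately on test data would leak information and distort the distances.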
In the next chapter, we will discuss feature extraction techniques for categorical variables, text, and images; these will allow us to apply KNN to more real-world problems.