In this chapter, we have seen how to build a binary classifier based on a linear model combined with the logistic function. It is fast, compact, and effective, and it can be trained incrementally using SGD. Moreover, with very little extra effort (the One-vs-Rest approach), the binary logistic regressor can be extended to multiclass problems.
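As a recap, the two ideas above can be sketched in a few lines of NumPy: a binary logistic regressor trained with plain SGD (one sample at a time), and a One-vs-Rest wrapper that trains one such classifier per class and predicts the class with the highest score. This is a minimal illustration, not a production implementation: the learning rate, epoch count, and toy dataset are arbitrary choices, and there is no regularization or convergence check.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgd_logistic(X, y, lr=0.1, epochs=50, seed=0):
    """Binary logistic regression trained with plain SGD.

    Illustrative sketch: fixed learning rate, no regularization.
    """
    rng = np.random.default_rng(seed)
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):       # one sample at a time
            err = sigmoid(X[i] @ w + b) - y[i]  # gradient of the log loss
            w -= lr * err * X[i]
            b -= lr * err
    return w, b

def one_vs_rest(X, y, n_classes):
    """Train one binary classifier per class (class k vs. the rest)."""
    return [sgd_logistic(X, (y == k).astype(float)) for k in range(n_classes)]

def predict_ovr(models, X):
    """Predict the class whose binary classifier gives the highest score."""
    scores = np.column_stack([sigmoid(X @ w + b) for w, b in models])
    return scores.argmax(axis=1)

# Toy 3-class dataset: three well-separated Gaussian blobs
rng = np.random.default_rng(1)
centers = [(-3.0, 0.0), (3.0, 0.0), (0.0, 3.0)]
X = np.vstack([rng.normal(c, 0.8, (40, 2)) for c in centers])
y = np.repeat([0, 1, 2], 40)

models = one_vs_rest(X, y, n_classes=3)
accuracy = (predict_ovr(models, X) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Because each binary sub-problem only needs the gradient of a single sample, the same loop works equally well when the data arrives as a stream, which is exactly what makes SGD-based training incremental.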
In the next chapter, we will focus on how to prepare data: to get the most out of a supervised algorithm, the input dataset must be carefully cleaned and normalized. In fact, real-world datasets can contain missing values, errors, and outliers, and features can be categorical or span very different ranges of values. Fortunately, several well-established techniques address these problems, transforming the dataset into the form best suited to the machine learning algorithm.
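To give a first taste of what "cleaning and normalizing" means in practice, here is a minimal sketch of two of the transformations mentioned above: mean imputation of missing values and per-feature standardization. The tiny matrix is invented purely for illustration, and the next chapter covers these steps (and more) in depth.

```python
import numpy as np

# Hypothetical raw feature matrix: one missing value (NaN) and two
# columns on very different scales.
X = np.array([[1.0,    200.0],
              [2.0,    400.0],
              [np.nan, 600.0],
              [4.0,    800.0]])

# Mean imputation: replace each NaN with its column mean
col_mean = np.nanmean(X, axis=0)
X_filled = np.where(np.isnan(X), col_mean, X)

# Standardization: rescale every column to zero mean and unit variance
X_std = (X_filled - X_filled.mean(axis=0)) / X_filled.std(axis=0)
print(X_std.mean(axis=0), X_std.std(axis=0))
```

After these two steps, every feature contributes on a comparable scale, which is essential for gradient-based learners such as the SGD-trained logistic regressor of this chapter.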