We now turn to a practical example that brings together what we've seen so far in this chapter.
Our dataset is artificially created: 10,000 observations and 10 features, all of them informative (that is, none redundant), with binary labels "0" and "1". Having only informative features is not an unrealistic assumption in machine learning, since feature selection or feature reduction typically removes the correlated or uninformative ones beforehand.
In: from sklearn.datasets import make_classification
    X, y = make_classification(n_samples=10000, n_features=10,
                               n_informative=10, n_redundant=0,
                               random_state=101)
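As a quick sanity check, we can inspect the shape and the label values of the generated arrays. This is a minimal sketch, assuming the same `make_classification` call as above:

```python
import numpy as np
from sklearn.datasets import make_classification

# Recreate the dataset with the same parameters and seed as in the text.
X, y = make_classification(n_samples=10000, n_features=10,
                           n_informative=10, n_redundant=0,
                           random_state=101)

print(X.shape)            # 10,000 observations, 10 features
print(np.unique(y))       # the two class labels, 0 and 1
```

With a fixed `random_state`, the same dataset is reproduced on every run.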
Now, we'll show you how to use different libraries and modules to perform this classification task with logistic regression. We won't focus here on how to measure performance, but on how the coefficients compose the model (as introduced in the previous chapters).
As a first step, we will use Statsmodels. After having loaded the right...