In this chapter, we looked at features and transformers and how they can be used in the data mining pipeline. We discussed what makes a good feature and how to choose good features algorithmically from a standard set. However, creating good features is more art than science, and it often requires domain knowledge and experience.
We then created our own transformer by implementing the interface that scikit-learn expects, which allows it to be used with scikit-learn's helper functions. We will be creating more transformers in later chapters so that we can perform effective testing using existing functions.
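As a reminder of the pattern, a custom transformer only needs to subclass scikit-learn's `TransformerMixin` (and usually `BaseEstimator`) and provide `fit` and `transform` methods. The sketch below is a minimal example in that style; the `MeanDiscrete` name and its thresholding logic are illustrative, not a fixed part of the API:

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin


class MeanDiscrete(BaseEstimator, TransformerMixin):
    """Illustrative transformer: binarizes each feature by
    comparing it to that feature's mean from the training data."""

    def fit(self, X, y=None):
        # Learn the per-column means from the training data
        X = np.asarray(X, dtype=float)
        self.mean_ = X.mean(axis=0)
        return self  # returning self lets calls be chained

    def transform(self, X):
        # Replace each value with 1 if above the learned mean, else 0
        X = np.asarray(X, dtype=float)
        return (X > self.mean_).astype(int)


X = np.array([[0.0, 10.0],
              [2.0, 0.0],
              [4.0, 5.0]])
X_binary = MeanDiscrete().fit_transform(X)
```

Because it follows the standard interface, an instance like this can be dropped straight into a `Pipeline` or passed to helpers such as `cross_val_score`.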
To take the lessons learned in this chapter further, I recommend signing up for the online data mining competition website Kaggle.com and trying some of the competitions. Their recommended starting place is the Titanic dataset, which allows you to practice the feature creation aspects of this chapter. Many of the features are not numerical, requiring you to convert them to numerical features before applying a data mining algorithm.
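One common way to perform that conversion is one-hot encoding, where each category becomes its own binary feature. The snippet below sketches this with `pandas.get_dummies` on a small made-up sample; the column names merely resemble the Titanic dataset's and are not the real data:

```python
import pandas as pd

# Hypothetical sample with Titanic-style categorical columns
df = pd.DataFrame({
    "Sex": ["male", "female", "female"],
    "Embarked": ["S", "C", "S"],
})

# One-hot encode: each category becomes a separate 0/1 column
encoded = pd.get_dummies(df, columns=["Sex", "Embarked"], dtype=int)
```

After this step every feature is numerical, so the result can be fed directly to a scikit-learn estimator.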