Python Feature Engineering Cookbook - Second Edition

By: Soledad Galli

Overview of this book

Feature engineering, the process of transforming variables and creating features, is time-consuming but essential to making your machine learning models perform well. This second edition of Python Feature Engineering Cookbook takes the struggle out of feature engineering by showing you how to use open source Python libraries to accelerate the process through a plethora of practical, hands-on recipes. This updated edition begins by addressing fundamental data challenges such as missing data and categorical values, before moving on to strategies for dealing with skewed distributions and outliers. The concluding chapters show you how to develop new features from various types of data, including text, time series, and relational databases. With the help of numerous open source Python libraries, you'll learn how to implement each feature engineering method in a performant, reproducible, and elegant manner. By the end of this book, you'll have the tools and expertise needed to confidently build reproducible, end-to-end feature engineering pipelines that can be deployed into production.

Performing polynomial expansion

Existing variables can be combined to create new insightful features. We discussed how to combine variables using mathematical and statistical operations in the previous two recipes, Combining features with mathematical functions and Combining features to reference variables. A combination of one feature with itself – that is, a polynomial combination of the same feature – can also return more predictive features. For example, in cases where the target has a quadratic relation with a variable, creating a second-degree polynomial of the feature allows us to use it in a linear model, as shown in the following figure:

Figure 8.4 – Change in the relationship between a target and a predictor variable after squaring the values of the latter

In the plot on the left, due to the quadratic relationship between the target, y, and the variable, x, there is a poor linear fit. Yet, in the plot on the right, we appreciate...