Python Feature Engineering Cookbook - Second Edition

By Soledad Galli

Encoding with the Weight of Evidence

The Weight of Evidence (WoE) was developed primarily for the credit and financial industries to facilitate variable screening and exploratory analysis, and to build more predictive linear models for evaluating the risk of loan default.

The WoE is computed from the basic odds ratio:

WoE = log( p(positive) / p(negative) )

Here, positive and negative refer to the target values 1 and 0, respectively. The proportion of positive cases, p(positive), is the number of positive cases in the category divided by the total number of positive cases in the training set; likewise, the proportion of negative cases, p(negative), is the number of negative cases in the category divided by the total number of negative cases in the training set.

The WoE has the following characteristics:

  • WoE = 0 if p(positive) / p(negative) = 1; that is, if the outcome is random
  • WoE > 0 if p(positive) > p(negative)
  • WoE < 0 if p(negative) > p(positive)

This allows us to see the predictive power of each category directly: the higher the WoE, the more likely the event is to occur; if the WoE is positive, the event is more likely to occur than not.
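To make this concrete, here is a minimal sketch with made-up counts (they are not taken from the dataset used in this recipe) that computes the WoE of a single category:

    import numpy as np

    # Hypothetical counts: the category holds 30 of the 100 positive
    # cases and 10 of the 100 negative cases in the training set.
    p_pos = 30 / 100
    p_neg = 10 / 100

    # WoE = log( p(positive) / p(negative) )
    print(np.log(p_pos / p_neg))  # ~1.10: positives are over-represented

A WoE of about 1.10 is greater than 0, so observations in this category are more likely to show the positive outcome.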

Logistic regression models a binary response, Y, based on the predictor variables, X, assuming a linear relationship between X and the log of odds of Y:

log( p(Y=1) / p(Y=0) ) = b0 + b1X1 + b2X2 + ... + bnXn

Here, log (p(Y=1)/p(Y=0)) is the log of odds. As you can see, the WoE encodes the categories on the same scale – that is, the log of odds – as the output of the logistic regression.

Therefore, by using WoE, the predictors are prepared and coded on the same scale, and the parameters in the logistic regression model – that is, the coefficients – can be directly compared.

In this recipe, we will perform WoE encoding using pandas and Feature-engine.

How to do it...

Let’s begin by making some imports and preparing the data:

  1. Import the required libraries and functions:
    import numpy as np
    import pandas as pd
    from sklearn.model_selection import train_test_split
  2. Let’s load the dataset and divide it into train and test sets:
    data = pd.read_csv("credit_approval_uci.csv")
    X_train, X_test, y_train, y_test = train_test_split(
        data.drop(labels=["target"], axis=1),
        data["target"],
        test_size=0.3,
        random_state=0,
    )
  3. Let’s take the complement of the target values (1 becomes 0 and vice versa) so that we can count the negative cases:
    neg_y_train = pd.Series(
        np.where(y_train == 1, 0, 1),
        index=y_train.index
    )
  4. Let’s determine the number of observations where the target variable takes a value of 1 or 0:
    total_pos = y_train.sum()
    total_neg = neg_y_train.sum()
  5. Now, let’s calculate the numerator and denominator of the WoE’s formula, which we discussed earlier in this recipe:
    pos = y_train.groupby(
        X_train["A1"]).sum() / total_pos
    neg = neg_y_train.groupby(
        X_train["A1"]).sum() / total_neg
  6. Now, let’s calculate the WoE per category:
    woe = np.log(pos/neg)

We can display the resulting category-to-WoE pairs by executing print(woe):

A1
Missing    0.203599
a          0.092373
b         -0.042410
dtype: float64
  7. Finally, let’s replace the categories of A1 with the WoE:
    X_train["A1"] = X_train["A1"].map(woe)
    X_test["A1"] = X_test["A1"].map(woe)

You can inspect the encoded variable by executing X_train["A1"].head().

Now, let’s perform WoE encoding using Feature-engine. First, we need to separate the data into train and test sets, as we did in step 2.

  8. Let’s import the encoder:
    from feature_engine.encoding import WoEEncoder
  9. Next, let’s set up the encoder so that we can encode three categorical variables:
    woe_enc = WoEEncoder(variables=["A1", "A9", "A12"])

Tip

Feature-engine’s WoEEncoder() will raise an error if p(negative) = 0 for any category, because division by 0 is not defined. To avoid this error, we can group infrequent categories, as we will discuss in the next recipe, Grouping rare or infrequent categories.
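For example, here is a minimal sketch of how we could group infrequent categories with Feature-engine’s RareLabelEncoder() before setting up the WoE encoder; the tol and n_categories values are illustrative choices, not part of this recipe:

    from feature_engine.encoding import RareLabelEncoder

    # Group categories present in less than 5% of the observations
    # under a single "Rare" label before computing the WoE.
    rare_enc = RareLabelEncoder(
        tol=0.05,
        n_categories=2,
        variables=["A1", "A9", "A12"],
    )
    X_train = rare_enc.fit_transform(X_train)
    X_test = rare_enc.transform(X_test)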

  10. Let’s fit the transformer to the train set so that it learns and stores the WoE of the different categories:
    woe_enc.fit(X_train, y_train)

Tip

We can display the category-to-WoE pairs stored for each variable by executing woe_enc.encoder_dict_.

  11. Finally, let’s encode the three categorical variables in the train and test sets:
    X_train_enc = woe_enc.transform(X_train)
    X_test_enc = woe_enc.transform(X_test)

Feature-engine returns pandas DataFrames containing the encoded categorical variables ready to use in machine learning models.
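Since WoEEncoder() follows the scikit-learn API, it can also be placed inside a pipeline together with the model. The following is a minimal sketch, not part of the original recipe; it restricts the data to the three encoded variables so that the example stays self-contained:

    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline
    from feature_engine.encoding import WoEEncoder

    cat_vars = ["A1", "A9", "A12"]

    # Encode the categories with the WoE, then fit the logistic regression.
    pipe = Pipeline([
        ("woe", WoEEncoder(variables=cat_vars)),
        ("logit", LogisticRegression()),
    ])
    pipe.fit(X_train[cat_vars], y_train)

    # Because the encoded predictors are on the same log of odds scale,
    # the coefficients can be compared directly across variables.
    print(pipe.named_steps["logit"].coef_)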

How it works...

First, with pandas sum(), we determined the total number of positive and negative cases. Next, using pandas groupby(), we determined the fraction of positive and negative cases per category. And with that, we calculated the WoE per category.

Finally, we automated the procedure with Feature-engine. We used WoEEncoder(), which learned the WoE per category with the fit() method, and then used transform(), which replaced the categories with the corresponding numbers.

See also

For an implementation of WoE with Category Encoders, visit this book’s GitHub repository.