#### Overview of this book

Feature engineering, the process of transforming variables and creating features, is time-consuming, but it ensures that your machine learning models perform at their best. This second edition of Python Feature Engineering Cookbook takes the struggle out of feature engineering by showing you how to use open source Python libraries to accelerate the process through a plethora of practical, hands-on recipes. The updated edition begins by addressing fundamental data challenges such as missing data and categorical values, before moving on to strategies for dealing with skewed distributions and outliers. The concluding chapters show you how to develop new features from various types of data, including text, time series, and relational databases. With the help of numerous open source Python libraries, you'll learn how to implement each feature engineering method in a performant, reproducible, and elegant manner. By the end of this book, you will have the tools and expertise needed to confidently build end-to-end, reproducible feature engineering pipelines that can be deployed into production.
Preface
Chapter 3: Transforming Numerical Variables
Chapter 4: Performing Variable Discretization
Chapter 5: Working with Outliers
Chapter 6: Extracting Features from Date and Time Variables
Chapter 7: Performing Feature Scaling
Chapter 8: Creating New Features
Chapter 9: Extracting Features from Relational Data with Featuretools
Chapter 10: Creating Features from a Time Series with tsfresh
Chapter 11: Extracting Features from Text Variables
Index
Other Books You May Enjoy

# Performing equal-width discretization

Equal-width discretization is the simplest discretization method, which consists of dividing the range of observed values for a variable into k equally sized intervals, where k is supplied by the user. The interval width for the X variable is given by the following:

width = ( max(X) - min(X) ) / k

For example, if the values of the variable vary between 0 and 100 and we create five bins, the width is (100 - 0) / 5 = 20, and the bins will be 0–20, 20–40, 40–60, 60–80, and 80–100. The first and final intervals (0–20 and 80–100) can be expanded to accommodate values smaller than 0 or greater than 100 by extending their limits to minus and plus infinity.
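The arithmetic above can be reproduced with `pandas.cut`, which performs equal-width binning. Here is a minimal sketch on made-up values spanning 0 to 100 (the series below is illustrative, not taken from the recipe's dataset):

```python
import pandas as pd

# Hypothetical values spanning 0 to 100
x = pd.Series([0, 10, 25, 47, 60, 81, 100])

# Five equal-width bins: width = (100 - 0) / 5 = 20
binned = pd.cut(x, bins=5)

# Inspect the learned interval edges; each interval is 20 units wide
print(binned.cat.categories)
```

Note that `pandas.cut` slightly extends the lowest edge so the minimum value falls inside the first bin.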

In this recipe, we will carry out equal-width discretization using `pandas`, `scikit-learn`, and `Feature-engine`.

## How to do it...

First, let’s import the necessary Python libraries and get the dataset ready:

1. Import the Python libraries and the data:
`import numpy as np`
`import pandas as pd`
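Before the full recipe, here is a hedged sketch of where these imports lead: equal-width binning with scikit-learn's `KBinsDiscretizer` (`strategy="uniform"` is its equal-width mode) on toy data. The data frame and column name below are hypothetical stand-ins for the recipe's dataset; Feature-engine offers an analogous transformer, `EqualWidthDiscretiser`.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import KBinsDiscretizer

# Toy data standing in for the recipe's dataset (hypothetical column "var")
rng = np.random.default_rng(42)
X = pd.DataFrame({"var": rng.uniform(0, 100, size=200)})

# scikit-learn: 5 equal-width bins, returned as ordinal codes 0..4
disc = KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="uniform")
X["var_binned"] = disc.fit_transform(X[["var"]])

# The learned edges run from min(var) to max(var) in equal steps
print(disc.bin_edges_[0])
```

Because `strategy="uniform"` splits the observed range evenly, the differences between consecutive edges all equal (max - min) / 5.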