
Python Feature Engineering Cookbook - Second Edition

By : Soledad Galli

Overview of this book

Feature engineering, the process of transforming variables and creating features, albeit time-consuming, ensures that your machine learning models perform seamlessly. This second edition of Python Feature Engineering Cookbook will take the struggle out of feature engineering by showing you how to use open source Python libraries to accelerate the process via a plethora of practical, hands-on recipes. This updated edition begins by addressing fundamental data challenges such as missing data and categorical values, before moving on to strategies for dealing with skewed distributions and outliers. The concluding chapters show you how to develop new features from various types of data, including text, time series, and relational databases. With the help of numerous open source Python libraries, you'll learn how to implement each feature engineering method in a performant, reproducible, and elegant manner. By the end of this Python book, you will have the tools and expertise needed to confidently build end-to-end and reproducible feature engineering pipelines that can be deployed into production.

Using decision trees for discretization

Decision tree methods discretize continuous attributes during the learning process. A decision tree evaluates all possible values of a feature and selects the cut-point that maximizes class separation, using a performance metric such as entropy or Gini impurity. It then repeats the process for each node of the first split, and for each node of the subsequent splits, until a stopping criterion is reached. Therefore, by design, decision trees can find the set of cut-points that partition a variable into intervals with good class coherence.
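The cut-points a tree learns for a single variable can be inspected directly. The following is a minimal scikit-learn sketch (not Feature-engine's actual code): it fits a shallow tree on one continuous variable and reads the split thresholds from the tree's internal structure, where leaf nodes are marked with a feature index of -2.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
var = X[:, [0]]  # one continuous variable ("mean radius")

# shallow tree: at most 2 levels of splits on this single variable
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(var, y)

# internal nodes carry a feature index >= 0; leaves are marked with -2,
# so keeping thresholds of internal nodes yields the learned cut-points
cut_points = sorted(
    t for t, f in zip(tree.tree_.threshold, tree.tree_.feature) if f >= 0
)
print(cut_points)  # interval boundaries chosen to maximize class separation
```

Each cut-point is a boundary between two intervals; a `max_depth` of 2 yields at most three cut-points (four intervals).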

Discretization with decision trees consists of using a decision tree to identify the optimal partitions for each continuous variable. In the Feature-engine implementation of this method, the decision tree is fit using the variable to discretize and the target. After fitting, the decision tree can assign each observation to one of the N end leaves, generating...
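The leaf-assignment step described above can be sketched with plain scikit-learn; this is an illustration of the idea, not Feature-engine's implementation. Each observation is routed to one of the tree's end leaves, and the continuous value is replaced by a discrete per-leaf output (here, the leaf's predicted probability of the positive class).

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
var = X[:, [0]]  # the variable to discretize

tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(var, y)

# route every observation to one of the N end leaves
leaf_ids = tree.apply(var)
n_leaves = tree.get_n_leaves()

# replace the continuous value with the leaf's output: all observations
# in the same leaf share one discrete value
discretized = tree.predict_proba(var)[:, 1]
print(n_leaves, np.unique(discretized))
```

Because every observation in a leaf receives the same output, the transformed variable takes at most N distinct values, one per leaf.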