Data Science Projects with Python - Second Edition

By: Stephen Klosterman

Overview of this book

If data is the new oil, then machine learning is the drill. As companies gain access to ever-increasing quantities of raw data, the ability to deliver state-of-the-art predictive models that support business decision-making becomes more and more valuable. In this book, you’ll work on an end-to-end project based around a realistic data set and split up into bite-sized practical exercises. This creates a case-study approach that simulates the working conditions you’ll experience in real-world data science projects. You’ll learn how to use key Python packages, including pandas, Matplotlib, and scikit-learn, and master the process of data exploration and data processing, before moving on to fitting, evaluating, and tuning algorithms such as regularized logistic regression and random forest. Now in its second edition, this book will take you through the end-to-end process of exploring data and delivering machine learning models. Updated for 2021, this edition includes brand new content on XGBoost, SHAP values, algorithmic fairness, and the ethical concerns of deploying a model in the real world. By the end of this data science book, you’ll have the skills, understanding, and confidence to build your own machine learning models and gain insights from real data.

4. The Bias-Variance Trade-Off

Activity 4.01: Cross-Validation and Feature Engineering with the Case Study Data

Solution:

  1. Select out the features from the DataFrame of the case study data.

    You can use the list of feature names that we've already created in this chapter, but be sure not to include the response variable, which would be a very good (but entirely inappropriate) feature:

    features = features_response[:-1]
    X = df[features].values
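
    As an optional sanity check, you can confirm that the response column has not slipped into the feature list:

    # The response variable should not be among the features
    assert 'default payment next month' not in features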
  2. Make a training/test split using a random seed of 24:
    X_train, X_test, y_train, y_test = \
    train_test_split(X, df['default payment next month'].values,
                     test_size=0.2, random_state=24)

    We'll use this split going forward, reserving the test data as the unseen test set. By specifying the random seed, we can easily create separate notebooks for other modeling approaches that use the same training data.
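    As a quick illustration of this (using illustrative variable names, and assuming NumPy has been imported as np, as elsewhere in the case study), repeating the call with the same random_state reproduces exactly the same split:

    # Repeating the split with the same seed produces an identical partition
    X_train_check, X_test_check, y_train_check, y_test_check = \
    train_test_split(X, df['default payment next month'].values,
                     test_size=0.2, random_state=24)
    print(np.array_equal(X_train, X_train_check))

    This should print True, confirming that the training data would be identical in another notebook that uses the same seed.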

  3. Instantiate MinMaxScaler...
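
    A minimal sketch of this step, assuming scikit-learn's MinMaxScaler and an illustrative variable name, might look like the following:

    from sklearn.preprocessing import MinMaxScaler
    # Scaler that rescales each feature to the [0, 1] range;
    # it should be fit on the training data only (or used inside a Pipeline)
    # so that no information leaks from the test set
    min_max_sc = MinMaxScaler()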