Data Science Projects with Python

By: Stephen Klosterman

Overview of this book

Data Science Projects with Python is designed to give you practical guidance on industry-standard data analysis and machine learning tools by applying them to realistic data problems. You will learn how to use pandas and Matplotlib to critically examine datasets with summary statistics and graphs, and to extract the insights you seek. You will build your knowledge as you prepare data using the scikit-learn package and feed it to machine learning algorithms such as regularized logistic regression and random forest. You'll discover how to tune algorithms to provide the most accurate predictions on new and unseen data. As you progress, you'll gain insight into the workings and output of these algorithms, building your understanding of both the predictive capabilities of the models and why they make the predictions they do. By the end of this book, you will have the skills to confidently use machine learning algorithms to perform detailed data analysis and extract meaningful insights from unstructured data.

Chapter 6: Imputation of Missing Data, Financial Analysis, and Delivery to Client


Activity 6: Deriving Financial Insights

  1. Using the testing set, calculate the cost of all defaults if there were no counseling program.

    Use this code for the calculation:

    cost_of_defaults = sum(y_test_all) * savings_per_default
    cost_of_defaults

    The output should be:

    Figure 6.59: Cost of all defaults assuming no counseling

  2. Calculate by what percent the cost of defaults can be decreased by the counseling program.

    The potential decrease in cost of default is the greatest possible net savings of the counseling program, divided by the cost of all defaults in the absence of a program:

    net_savings[max_savings_ix]/cost_of_defaults

    The output should be:

    Figure 6.60: Fractional decrease in cost of defaults that could result from a counseling program

    Results indicate that we can decrease the cost of defaults by 23% using a counseling program, guided by predictive modeling.

  3. Calculate the net savings per account at the optimal threshold.

    Use this code for the calculation:

    net_savings[max_savings_ix]/len(y_test_all)

    The output should be:

    Figure 6.61: Net savings per account possible with the counseling program

Results like these help the client scale the potential savings of the counseling program to however many accounts they serve.
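The quantities referenced in these steps (net_savings, max_savings_ix, cost_of_all_counselings, and so on) were computed in the previous exercise. As a reminder of how such a threshold sweep can be structured, here is a minimal, self-contained sketch; the model scores and all NT$ figures below are stand-ins for illustration, not the book's values:

```python
import numpy as np

# Stand-in data: true default flags and synthetic model scores
# (in the case study these come from the test set and trained model)
rng = np.random.default_rng(0)
y_test_all = rng.integers(0, 2, size=1000)
y_pred_proba = rng.random(1000) * 0.5 + y_test_all * 0.4

savings_per_default = 13000   # NT$ saved per prevented default (assumed)
cost_per_counseling = 7500    # NT$ spent per counseled account (assumed)
effectiveness = 0.70          # assumed fraction of defaults counseling prevents

thresholds = np.linspace(0, 1, 101)
n_pos_pred = np.zeros_like(thresholds)
n_true_pos = np.zeros_like(thresholds)
for i, t in enumerate(thresholds):
    pos = y_pred_proba >= t                        # accounts flagged at threshold t
    n_pos_pred[i] = pos.sum()                      # counseling offered to these accounts
    n_true_pos[i] = (pos & (y_test_all == 1)).sum()

cost_of_all_counselings = n_pos_pred * cost_per_counseling
anticipated_savings = n_true_pos * savings_per_default * effectiveness
net_savings = anticipated_savings - cost_of_all_counselings
max_savings_ix = np.argmax(net_savings)            # index of the best threshold
```

The structure (savings from prevented defaults minus the upfront cost of counseling, evaluated at every threshold) is what matters here; swapping in the case study's actual scores and financial assumptions reproduces the arrays used above.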

  4. Plot the net savings per account against the cost of counseling per account for each threshold.

    Create the plot with this code:

    plt.plot(cost_of_all_counselings/len(y_test_all), net_savings/len(y_test_all))
    plt.xlabel('Upfront investment: cost of counselings per account (NT$)')
    plt.ylabel('Net savings per account (NT$)')

    The resulting plot should appear like this:

    Figure 6.62: The initial cost of the counseling program needed to achieve a given amount of savings

This indicates how much money the client needs to budget for the counseling program in a given month to achieve a given amount of savings. It looks like the greatest benefit can be created by budgeting up to about NT$2000 per account. Beyond this, net savings are relatively flat, and then decline. The client may not actually be able to budget this much for the program. However, this graphic gives them evidence to argue for a larger budget if they need to.

    This result corresponds to our graphic from the previous exercise. Although we've shown the optimal threshold is 0.2, it may be fine for the client to use a higher threshold up to about 0.25, thus making fewer positive predictions, offering counseling to fewer account holders, and having a smaller upfront program cost. Figure 6.62 shows how this plays out in terms of cost and net savings per account.

  5. Plot the fraction of accounts predicted as positive (this is called the "flag rate") at each threshold.

    Use this code to plot the flag rate against the threshold:

    plt.plot(thresholds, n_pos_pred/len(y_test_all))
    plt.ylabel('Flag rate')
    plt.xlabel('Threshold')

    The plot should appear as follows:

    Figure 6.63: Flag rate against threshold for the credit counseling program

This plot shows the fraction of accounts that will be predicted to default, and therefore recommended for outreach, at each threshold. It appears that at the optimal threshold of 0.2, only about 30% of accounts will be flagged for counseling. This shows how using a model to prioritize accounts for counseling can help focus resources on the right accounts and reduce waste. Higher thresholds, which may result in nearly optimal savings up to a threshold of about 0.25 as shown in Figure 6.36 (Exercise 25, Characterizing Costs and Savings), result in lower flag rates.

  6. Plot a precision-recall curve for the testing data using the following code:

    plt.plot(n_true_pos/sum(y_test_all), np.divide(n_true_pos, n_pos_pred))
    plt.xlabel('Recall')
    plt.ylabel('Precision')

    The plot should look like this:

    Figure 6.64: Precision-recall curve

    Figure 6.64 shows that in order to start getting a true positive rate (that is, recall) much above 0, we need to accept a precision of about 0.75 or lower. So, it appears there is room for improvement in our model. While this would not necessarily be communicated to the client, it shows that by using more advanced modeling techniques, or a richer set of features, model performance could be improved.

Precision and recall have a direct link to the cost and savings of the program: the more precise our predictions are, the less money we waste on counseling due to incorrect model predictions. And the higher the recall, the more savings we can create by successfully identifying accounts that would default. Compare the code in this step to the code used to calculate cost and savings in the previous exercise to see this. This links the financial analysis to the machine learning metrics we examined earlier in the case study.
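As a back-of-the-envelope illustration of this link (all NT$ figures and the effectiveness rate here are assumed for the sketch, not taken from the case study), the expected net savings per flagged account can be written in terms of precision alone:

```python
# Hypothetical per-account figures (assumed, for illustration only)
precision = 0.75              # fraction of flagged accounts that would truly default
savings_per_default = 13000   # NT$ saved per prevented default (assumed)
effectiveness = 0.70          # assumed counseling success rate
cost_per_counseling = 5000    # NT$ spent per flagged account (assumed)

# Every flagged account incurs the counseling cost, but only the truly
# defaulting fraction (precision) can generate savings:
net_per_flagged = precision * savings_per_default * effectiveness - cost_per_counseling
print(f'NT${net_per_flagged:,.0f} expected net savings per counseled account')
```

Raising precision increases the first term without changing the second, which is the arithmetic behind "more precise predictions waste less money"; recall scales how many such flagged accounts there are in total.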

    To see the connection of precision and recall with the threshold used to define positive and negative predictions, it can be instructive to plot them separately.

  7. Plot precision and recall separately on the y-axis against threshold on the x-axis.

    Use this code to produce the plot:

    plt.plot(thresholds, np.divide(n_true_pos, n_pos_pred), label='Precision')
    plt.plot(thresholds, n_true_pos/sum(y_test_all), label='Recall')
    plt.xlabel('Threshold')
    plt.legend()

    The plot should appear as follows:

    Figure 6.65: Precision and recall plotted separately against threshold

    This plot sheds some light on why the optimal threshold turned out to be 0.2. While the optimal threshold also depends on the financial analysis of costs and savings, we can see here that the steepest part of the initial increase in precision, which represents the correctness of positive predictions and is therefore a measure of how cost-effective the model-guided counseling can be, happens up to a threshold of about 0.2.
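One practical caveat with the precision calculation used in steps 6 and 7: at thresholds high enough that no accounts are flagged, n_pos_pred is zero, and np.divide emits a runtime warning and produces NaN or inf entries. A hedged sketch of a safer variant, using the ufunc's out and where parameters (the count arrays below are hypothetical):

```python
import numpy as np

n_true_pos = np.array([50., 40., 10., 0.])   # hypothetical true-positive counts by threshold
n_pos_pred = np.array([200., 80., 12., 0.])  # zero accounts flagged at the top threshold

# Only divide where the denominator is nonzero; leave NaN elsewhere,
# which Matplotlib simply skips when plotting
precision = np.divide(n_true_pos, n_pos_pred,
                      out=np.full_like(n_true_pos, np.nan),
                      where=n_pos_pred > 0)
print(precision)
```

This keeps the precision-against-threshold plot clean at the high end of the threshold range without suppressing warnings globally.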