Data Science Projects with Python

By: Stephen Klosterman

Chapter 5: Decision Trees and Random Forests


Activity 5: Cross-Validation Grid Search with Random Forest

  1. Create a dictionary representing the grid of values for the max_depth and n_estimators hyperparameters that will be searched over. Include depths of 3, 6, 9, and 12, and 10, 50, 100, and 200 trees. Leave the other hyperparameters at their defaults. Create the dictionary using this code:

    rf_params = {'max_depth':[3, 6, 9, 12],
                 'n_estimators':[10, 50, 100, 200]}

    Note

    There are many other possible hyperparameters to search for. In particular, the scikit-learn documentation for random forest indicates the following:

    "The main parameters to adjust when using these methods is n_estimators and max_features" and that "Empirical good default values are … max_features=sqrt(n_features) for classification tasks."

    Source: https://scikit-learn.org/stable/modules/ensemble.html#parameters

    Note

    For the purposes of this book, we will use max_features='auto' (which is equal to sqrt(n_features)) and limit our exploration to max_depth and n_estimators for the sake of a shorter runtime. In a real-world situation, you should explore other hyperparameters according to how much computational time you can afford. Remember that in order to search in especially large parameter spaces, you can use RandomizedSearchCV to avoid exhaustively calculating metrics for every combination of hyperparameters in the grid that you specify.
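    To give a sense of what this could look like, here is a minimal sketch (not part of the activity) of a RandomizedSearchCV call for the random forest. The parameter ranges and the n_iter value are illustrative assumptions, not values we have tuned, and the code assumes the rf model object and the X_train and y_train data from earlier in the chapter:

    from sklearn.model_selection import RandomizedSearchCV

    # Illustrative parameter ranges (assumptions, not tuned values)
    rf_param_distributions = {'max_depth': list(range(3, 13)),
                              'n_estimators': [10, 50, 100, 200, 400],
                              'max_features': ['sqrt', 'log2', None]}

    # Sample 20 random combinations instead of exhaustively searching all of them
    rand_search_rf = RandomizedSearchCV(rf, param_distributions=rf_param_distributions,
                                        n_iter=20, scoring='roc_auc', cv=4,
                                        random_state=42, verbose=1)
    rand_search_rf.fit(X_train, y_train)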

  2. Instantiate a GridSearchCV object using the same options that we have used previously in this chapter, but with the dictionary of hyperparameters created in step 1. Set verbose=2 to see the output for each fit performed. You can reuse the same random forest model object, rf, that we have been using. Instantiate the class using this code:

    # Search over the grid with 4-fold cross-validation, scoring by ROC AUC
    cv_rf = GridSearchCV(rf, param_grid=rf_params, scoring='roc_auc', fit_params=None,
                         n_jobs=None, iid=False, refit=True, cv=4, verbose=2,
                         pre_dispatch=None, error_score=np.nan, return_train_score=True)
  3. Fit the GridSearchCV object on the training data. Perform the grid search using this code:

    cv_rf.fit(X_train, y_train)

    Because we chose the verbose=2 option, you will see a relatively large amount of output in the notebook. There will be output for each combination of hyperparameters and each cross-validation fold as it is fit and tested. Here are the first few lines of output:

    Figure 6.56: The verbose output from cross-validation

    While it's not necessary to see all this output for shorter cross-validation procedures, for longer ones it can be reassuring to see that the cross-validation is working, and it gives you an idea of how long the fits are taking for various combinations of hyperparameters. If things are taking too long, you may want to interrupt the kernel by pressing the stop button (the square) at the top of the notebook, and then either choose hyperparameters that will take less time to run or use a more limited set of hyperparameters.

    When this is all done, you should see the following output:

    Figure 6.57: The cross-validation output upon completion

    This cross-validation job took about two minutes to run. As your jobs grow, you may wish to explore parallel processing with the n_jobs parameter to see whether it's possible to speed up the search. Using n_jobs=-1, and omitting the pre_dispatch option so that the default is used, we were able to achieve a runtime of 52 seconds, compared to the roughly two-minute runtime with serial processing shown here. However, with parallel processing, you won't be able to see the output of each individual model fitting operation as shown in Figure 5.30.
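    As a rough illustration, the parallel version of the search might look like the following sketch; this is an assumption on our part rather than code from the activity, with the remaining options simply left at their defaults:

    # Sketch: the same grid search, run in parallel across all available CPU cores
    cv_rf_parallel = GridSearchCV(rf, param_grid=rf_params, scoring='roc_auc',
                                  n_jobs=-1, refit=True, cv=4, verbose=2,
                                  return_train_score=True)
    cv_rf_parallel.fit(X_train, y_train)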

  4. Put the results of the grid search in a pandas DataFrame. Use this code to put the results in a DataFrame:

    cv_rf_results_df = pd.DataFrame(cv_rf.cv_results_)
  5. Create a pcolormesh visualization of the mean testing score for each combination of hyperparameters. Here is the code to create a mesh graph of the cross-validation results. It's similar to the example graph that we created previously, but with annotations specific to the cross-validation we performed here:

    # Create a 5x5 mesh of cell boundaries for the 4x4 grid of scores
    xx_rf, yy_rf = np.meshgrid(range(5), range(5))
    cm_rf = plt.cm.jet
    ax_rf = plt.axes()
    # Color each cell by the mean testing ROC AUC for that hyperparameter combination
    pcolor_graph = ax_rf.pcolormesh(
        xx_rf, yy_rf,
        cv_rf_results_df['mean_test_score'].values.reshape((4,4)),
        cmap=cm_rf)
    plt.colorbar(pcolor_graph, label='Average testing ROC AUC')
    ax_rf.set_aspect('equal')
    # Put the ticks at the cell centers and label them with the hyperparameter values
    ax_rf.set_xticks([0.5, 1.5, 2.5, 3.5])
    ax_rf.set_yticks([0.5, 1.5, 2.5, 3.5])
    ax_rf.set_xticklabels([str(tick_label) for tick_label in rf_params['n_estimators']])
    ax_rf.set_yticklabels([str(tick_label) for tick_label in rf_params['max_depth']])
    ax_rf.set_xlabel('Number of trees')
    ax_rf.set_ylabel('Maximum depth')

    The main change from our previous example is that, instead of plotting the integers from 1 to 16, we're plotting the mean testing scores, which we retrieve and reshape with cv_rf_results_df['mean_test_score'].values.reshape((4,4)). The other new element is that we use list comprehensions to create lists of strings for the tick labels, based on the numerical hyperparameter values in the grid. We access these values from the dictionary that we defined, and convert each one to the str (string) data type within the list comprehension, for example: ax_rf.set_xticklabels([str(tick_label) for tick_label in rf_params['n_estimators']]). We have already set the tick locations to the places where we want the ticks using set_xticks. The graph should appear as follows:

    Figure 6.58: Results of cross-validation of a random forest over a grid with two hyperparameters
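    If you want to confirm that the reshaped scores line up with the tick labels, one quick check (a sketch, not part of the activity) is to reshape the hyperparameter columns of the results DataFrame in the same way; each row should contain a single max_depth value and each column a single n_estimators value:

    # Sanity check: the grid search results should be ordered with n_estimators varying
    # fastest, so a (4, 4) reshape puts one max_depth per row and one n_estimators per column
    print(cv_rf_results_df['param_max_depth'].values.reshape((4, 4)))
    print(cv_rf_results_df['param_n_estimators'].values.reshape((4, 4)))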

  6. Conclude which set of hyperparameters to use.

    What can we conclude from our grid search? There certainly seems to be an advantage to using trees with a depth of more than three. Of the parameter combinations that we tried, max_depth=9 with 200 trees yields the best average testing score, which you can look up in the DataFrame: ROC AUC = 0.776. This is the best model we've found so far.
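    One way to do this lookup (a sketch, not prescribed by the activity) is to sort the results DataFrame by the mean testing score, or to use the attributes that the fitted GridSearchCV object stores:

    # Sketch: locate the best hyperparameter combination and its average testing ROC AUC
    best_row = cv_rf_results_df.sort_values('mean_test_score', ascending=False).iloc[0]
    print(best_row[['param_max_depth', 'param_n_estimators', 'mean_test_score']])

    # GridSearchCV also stores these directly after fitting
    print(cv_rf.best_params_)
    print(cv_rf.best_score_)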

    In a real-world scenario, we'd likely do a more thorough search. Some good next steps would be to try a larger number of trees and to not spend any more time on n_estimators < 200, since we know that at least 200 trees were needed to get the best performance here. You could also search a more granular space of max_depth, instead of jumping by threes as we've done here, and try a few other hyperparameters, such as max_features. However, for the purposes of this book, we'll assume that we've done a more thorough search like this and concluded that max_depth=9 and n_estimators=200 is the optimal set of hyperparameters.
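    As an illustration only (these are assumed values, not a grid we have run in this book), such a follow-up grid might look something like this:

    # Hypothetical follow-up grid: finer max_depth steps, larger numbers of trees,
    # and max_features added as an additional hyperparameter to explore
    rf_params_finer = {'max_depth': [7, 8, 9, 10, 11],
                       'n_estimators': [200, 300, 400],
                       'max_features': ['sqrt', 0.5, None]}
    # This dictionary would be passed to GridSearchCV in the same way as rf_params above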