Data Science for Marketing Analytics

By Tommy Blanchard, Debasish Behera, Pranshu Bhatnagar

Overview of this book

Data Science for Marketing Analytics covers every stage of data analytics, from working with a raw dataset to segmenting a population and modeling different parts of the population based on the segments. The book starts by teaching you how to use Python libraries, such as pandas and Matplotlib, to read data into Python, manipulate it, and create plots using both categorical and continuous variables. Then, you'll learn how to segment a population into groups and use different clustering techniques to evaluate customer segmentation. As you make your way through the chapters, you'll explore ways to evaluate and select the best segmentation approach, and go on to create a linear regression model on customer value data to predict lifetime value. In the concluding chapters, you'll gain an understanding of regression techniques and tools for evaluating regression models, and explore ways to predict customer choice using classification algorithms. Finally, you'll apply these techniques to create a churn model for predicting customer product choices. By the end of this book, you will be able to build your own marketing reporting and interactive dashboard solutions.

Chapter 6: Other Regression Techniques and Tools for Evaluation


Activity 10: Testing Which Variables are Important for Predicting Responses to a Marketing Offer

  1. Import pandas, read in the data from offer_responses.csv, and use the head function to view the first five rows of the data:

    import pandas as pd
    
    df = pd.read_csv('offer_responses.csv')
    df.head() 
  2. Import train_test_split from sklearn and use it to split the data into a training and test set, using responses as the y variable and all others as the predictor (X) variables. Use random_state=10 for train_test_split:

    from sklearn.model_selection import train_test_split
    
    X = df[['offer_quality',
            'offer_discount',
            'offer_reach'
           ]]
    
    y = df['responses']
    
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 10)
  3. Import LinearRegression and mean_squared_error from sklearn. Fit a model to the training data (using all of the predictors), get predictions from the model on the test data, and print out the calculated RMSE on the test data:

    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_squared_error
    
    model = LinearRegression()
    model.fit(X_train,y_train)
    
    predictions = model.predict(X_test)
    
    print('RMSE with all variables: ' + str(mean_squared_error(y_test, predictions)**0.5))
  4. Create X_train2 and X_test2 by dropping offer_quality from X_train and X_test. Train and evaluate the RMSE of the model using X_train2 and X_test2:

    X_train2 = X_train.drop('offer_quality',axis=1)
    X_test2 = X_test.drop('offer_quality',axis=1)
    
    model = LinearRegression()
    model.fit(X_train2,y_train)
    
    predictions = model.predict(X_test2)
    
    print('RMSE without offer quality: ' + str(mean_squared_error(y_test, predictions)**0.5))
  5. Perform the same sequence of steps from step 4, but this time dropping offer_discount instead of offer_quality:

    X_train3 = X_train.drop('offer_discount',axis=1)
    X_test3 = X_test.drop('offer_discount',axis=1)
    
    model = LinearRegression()
    model.fit(X_train3,y_train)
    
    predictions = model.predict(X_test3)
    
    print('RMSE without offer discount: ' + str(mean_squared_error(y_test, predictions)**0.5))
  6. Perform the same sequence of steps, but this time dropping offer_reach:

    X_train4 = X_train.drop('offer_reach',axis=1)
    X_test4 = X_test.drop('offer_reach',axis=1)
    
    model = LinearRegression()
    model.fit(X_train4,y_train)
    
    predictions = model.predict(X_test4)
    
    print('RMSE without offer reach: ' + str(mean_squared_error(y_test, predictions)**0.5))

You should notice that the RMSE goes up when offer_reach or offer_discount is removed from the model, but stays about the same when offer_quality is removed. This suggests that offer_quality isn't contributing to the model's accuracy and can safely be dropped to simplify the model.
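Rather than repeating steps 4 to 6 by hand for each predictor, the same comparison can be written as a loop. Here is a minimal sketch, assuming the X_train, X_test, y_train, and y_test variables from step 2 are still in scope:

    # Drop each predictor in turn and compare the resulting test RMSE
    # against the full-model RMSE from step 3
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_squared_error

    for feature in X_train.columns:
        model = LinearRegression()
        model.fit(X_train.drop(feature, axis=1), y_train)
        predictions = model.predict(X_test.drop(feature, axis=1))
        rmse = mean_squared_error(y_test, predictions)**0.5
        print('RMSE without ' + feature + ': ' + str(rmse))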

Activity 11: Using Lasso Regression to Choose Features for Predicting Customer Spend

  1. Import pandas, use it to read the data in customer_spend.csv, and use the head function to view the first five rows of data:

    import pandas as pd
    
    df = pd.read_csv('customer_spend.csv')
    df.head()
  2. Use train_test_split from sklearn to split the data into training and test sets, with random_state=100 and cur_year_spend as the y variable:

    from sklearn.model_selection import train_test_split
    
    cols = df.columns[1:]  # every column except cur_year_spend, the first column
    X = df[cols]
    
    y = df['cur_year_spend']
    
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 100)
  3. Import Lasso from sklearn and fit a lasso model (with normalize=True and random_state=10) to the training data:

    from sklearn.linear_model import Lasso
    
    lasso_model = Lasso(normalize=True, random_state=10)
    lasso_model.fit(X_train,y_train)
  4. Get the coefficients from the lasso model, and store the names of the features that have non-zero coefficients along with their coefficient values in the selected_features and selected_coefs variables, respectively:

    coefs = lasso_model.coef_
    selected_features = cols[coefs != 0]
    selected_coefs = coefs[coefs != 0]
  5. Print out the names of the features with non-zero coefficients and their associated coefficient values using the following code:

    for coef, feature in zip(selected_coefs, selected_features):
        print(feature + ' coefficient: ' + str(coef))

From the output, we can see not only which variables are important, but also the effect they have. For example, for each dollar a customer spent in the previous year, we can expect them to spend approximately $0.80 this year, all else being equal.
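To see how the strength of the regularization affects which features survive, you can refit the lasso with a few different values of alpha. The following is a minimal sketch, assuming X_train and y_train from step 2 are still in scope and keeping the normalize=True setting from step 3; the alpha values are illustrative, not tuned:

    # Count how many coefficients lasso leaves non-zero as the
    # regularization strength (alpha) increases
    from sklearn.linear_model import Lasso

    for alpha in [0.01, 0.1, 1.0, 10.0]:
        lasso = Lasso(alpha=alpha, normalize=True, random_state=10)
        lasso.fit(X_train, y_train)
        n_selected = (lasso.coef_ != 0).sum()
        print('alpha=' + str(alpha) + ': ' + str(n_selected) + ' features kept')

Larger alpha values push more coefficients to exactly zero, which is what makes lasso useful as a feature selection tool.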

Activity 12: Building the Best Regression Model for Customer Spend Based on Demographic Data

  1. Import pandas, read the data in spend_age_income_ed.csv into a DataFrame, and use the head function to view the first five rows of the data:

    import pandas as pd
    
    df = pd.read_csv('spend_age_income_ed.csv')
    df.head()
  2. Perform a train-test split, with random_state=10:

    from sklearn.model_selection import train_test_split
    
    X = df[['age','income','years_of_education']]
    y = df['spend']
    
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 10)
  3. Fit a linear regression model to the training data:

    from sklearn.linear_model import LinearRegression
    
    model = LinearRegression()
    model.fit(X_train,y_train)
  4. Fit two regression tree models to the data, one with max_depth=2 and one with max_depth=5:

    from sklearn.tree import DecisionTreeRegressor
    
    max2_tree_model = DecisionTreeRegressor(max_depth=2)
    max2_tree_model.fit(X_train,y_train)
    
    max5_tree_model = DecisionTreeRegressor(max_depth=5)
    max5_tree_model.fit(X_train,y_train)
  5. Fit two random forest models to the data, one with max_depth=2, one with max_depth=5, and random_state=10 for both:

    from sklearn.ensemble import RandomForestRegressor
    
    max2_forest_model = RandomForestRegressor(max_depth=2, random_state=10)
    max2_forest_model.fit(X_train,y_train)
    
    max5_forest_model = RandomForestRegressor(max_depth=5, random_state=10)
    max5_forest_model.fit(X_train,y_train)
  6. Calculate and print out the RMSE on the test data for all five models:

    from sklearn.metrics import mean_squared_error
    
    linear_predictions = model.predict(X_test)
    print('Linear model RMSE: ' + str(mean_squared_error(y_test, linear_predictions)**0.5))
    
    max2_tree_predictions = max2_tree_model.predict(X_test)
    print('Tree with max depth of 2 RMSE: ' + str(mean_squared_error(y_test, max2_tree_predictions)**0.5))
    
    max5_tree_predictions = max5_tree_model.predict(X_test)
    print('Tree with max depth of 5 RMSE: ' + str(mean_squared_error(y_test, max5_tree_predictions)**0.5))
    
    max2_forest_predictions = max2_forest_model.predict(X_test)
    print('Random Forest with max depth of 2 RMSE: ' + str(mean_squared_error(y_test, max2_forest_predictions)**0.5))
    
    max5_forest_predictions = max5_forest_model.predict(X_test)
    print('Random Forest with max depth of 5 RMSE: ' + str(mean_squared_error(y_test, max5_forest_predictions)**0.5))

We can see that, for this particular problem, a random forest with a max depth of 5 performs best out of the models we tried. In general, it's good practice to try a few different types of models and hyperparameter values to make sure the model you settle on captures the relationships in the data well.
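One way to make that search more systematic is to cross-validate over a grid of hyperparameter values on the training data, rather than comparing a handful of hand-picked models on the test set. The following is a minimal sketch using scikit-learn's GridSearchCV, assuming X_train and y_train from step 2 are still in scope; the grid of max_depth values is illustrative:

    # Use 5-fold cross-validation on the training data to choose max_depth,
    # scoring by (negated) mean squared error
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import GridSearchCV

    param_grid = {'max_depth': [2, 3, 5, 7, 10]}
    search = GridSearchCV(RandomForestRegressor(random_state=10),
                          param_grid,
                          scoring='neg_mean_squared_error',
                          cv=5)
    search.fit(X_train, y_train)
    print('Best max_depth: ' + str(search.best_params_['max_depth']))

Keeping the final RMSE comparison on the held-out test set, as in step 6, then gives an unbiased estimate of how the chosen model will perform on new data.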