Data Science for Marketing Analytics

By: Tommy Blanchard, Debasish Behera, Pranshu Bhatnagar

Overview of this book

Data Science for Marketing Analytics covers every stage of data analytics, from working with a raw dataset to segmenting a population and modeling different parts of the population based on the segments. The book starts by teaching you how to use Python libraries, such as pandas and Matplotlib, to read data into Python, manipulate it, and create plots, using both categorical and continuous variables. Then, you'll learn how to segment a population into groups and use different clustering techniques to evaluate customer segmentation. As you make your way through the chapters, you'll explore ways to evaluate and select the best segmentation approach, and go on to create a linear regression model on customer value data to predict lifetime value. In the concluding chapters, you'll gain an understanding of regression techniques and tools for evaluating regression models, and explore ways to predict customer choice using classification algorithms. Finally, you'll apply these techniques to create a churn model for modeling customer product choices. By the end of this book, you will be able to build your own marketing reporting and interactive dashboard solutions.
Table of Contents (12 chapters)

Chapter 3: Unsupervised Learning: Customer Segmentation


Activity 3: Loading, Standardizing, and Calculating Distance with a Dataset

  1. Load the data from the customer_interactions.csv file into a pandas DataFrame and look at the first five rows of data:

    import pandas as pd
    df = pd.read_csv('customer_interactions.csv')
    df.head()
  2. Calculate the Euclidean distance between the first two data points in the DataFrame using the following code:

    import math
    math.sqrt((df.loc[0, 'spend'] - df.loc[1, 'spend'])**2 + (df.loc[0, 'interactions'] - df.loc[1, 'interactions'])**2)

    Note

    There are other, more concise methods for calculating distance, including using the SciPy package. Since we are doing it here for pedagogical reasons, we have used the most explicit method.

  3. Calculate the standardized values of the variables and store them in new columns named z_spend and z_interactions. Use df.head() to look at the first five rows of data:

    df['z_spend'] = (df['spend'] - df['spend'].mean())/df['spend'].std()
    df['z_interactions'] = (df['interactions'] - df['interactions'].mean())/df['interactions'].std()
    df.head()
  4. Calculate the distance between the first two data points using the standardized values:

    math.sqrt((df.loc[0, 'z_spend'] - df.loc[1, 'z_spend'])**2 + (df.loc[0, 'z_interactions'] - df.loc[1, 'z_interactions'])**2)
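The standardize-then-measure pattern in steps 3 and 4 can be sketched with only the standard library. The following is a minimal sketch using made-up spend/interaction values (not the activity's customer_interactions.csv); it uses the sample standard deviation (ddof=1), matching pandas' default .std(), and math.dist (Python 3.8+) as one of the more concise distance helpers the note alludes to:

```python
import math
import statistics

# Hypothetical data: (spend, interactions) for four customers —
# made-up numbers, not the activity's dataset
spend = [150.0, 120.0, 310.0, 95.0]
interactions = [14.0, 9.0, 6.0, 2.0]

def z_scores(values):
    """Standardize using the sample standard deviation (n - 1),
    which matches pandas' default .std()."""
    mu = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mu) / sd for v in values]

z_spend = z_scores(spend)
z_interactions = z_scores(interactions)

# Euclidean distance between the first two customers, before and
# after standardization; math.dist is the concise one-call form
raw_dist = math.dist((spend[0], interactions[0]),
                     (spend[1], interactions[1]))
std_dist = math.dist((z_spend[0], z_interactions[0]),
                     (z_spend[1], z_interactions[1]))

print(raw_dist, std_dist)
```

Because spend sits on a much larger scale than interactions, the raw distance is dominated almost entirely by spend; after standardization both variables contribute comparably, which is exactly why the activity standardizes before measuring distances.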

Activity 4: Using k-means Clustering on Customer Behavior Data

  1. Read in the data in the customer_offers.csv file, and set the customer_name column to the index:

    import pandas as pd
    
    customer_offers = pd.read_csv('customer_offers.csv')
    customer_offers = customer_offers.set_index('customer_name')
  2. Perform k-means clustering with three clusters, and save the cluster each data point is assigned to:

    from sklearn import cluster
    
    model = cluster.KMeans(n_clusters=3, random_state=10)
    # Use a name that doesn't shadow the imported cluster module
    clusters = model.fit_predict(customer_offers)
    offer_cols = customer_offers.columns
    customer_offers['cluster'] = clusters
  3. Use PCA to visualize the clusters:

    from sklearn import decomposition
    import matplotlib.pyplot as plt
    %matplotlib inline
    
    pca = decomposition.PCA(n_components=2)
    customer_offers['pc1'], customer_offers['pc2'] = zip(*pca.fit_transform(customer_offers[offer_cols]))
    
    colors = ['r', 'b', 'k', 'g']
    markers = ['^', 'o', 'd', 's']
    
    for c in customer_offers['cluster'].unique():
      d = customer_offers[customer_offers['cluster'] == c]
      plt.scatter(d['pc1'], d['pc2'], marker=markers[c], color=colors[c])
    
    plt.show()
  4. For each cluster, investigate how they differ from the average in each of our features. In other words, find how much customers in each cluster differ from the average proportion of times they responded to an offer. Plot these differences in a bar chart:

    total_proportions = customer_offers[offer_cols].mean()
    for i in range(3):
      plt.figure(i)
      cluster_df = customer_offers[customer_offers['cluster'] == i]
      cluster_proportions = cluster_df[offer_cols].mean()
    
      diff = cluster_proportions - total_proportions
      plt.bar(range(1, len(offer_cols) + 1), diff)
  5. Load the information about what the offers were from offer_info.csv. For each cluster, find the five offers where the cluster differs most from the mean, and print out the varietal of those offers:

    offer_info = pd.read_csv('offer_info.csv')
    for i in range(3):
      cluster_df = customer_offers[customer_offers['cluster'] == i]
      cluster_proportions = cluster_df[offer_cols].mean()
    
      diff = cluster_proportions - total_proportions
      cluster_rep_offers = list(diff.sort_values(ascending=False).index.astype(int)[0:5])
      print(offer_info.loc[offer_info['offer_id'].isin(cluster_rep_offers),'varietal'])
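Under the hood, KMeans.fit_predict in step 2 runs Lloyd's algorithm: alternately assign each point to its nearest centroid, then move each centroid to the mean of its assigned points, until the assignments stop changing. A toy, standard-library-only sketch of that loop on made-up 2-D points (scikit-learn adds smarter k-means++ initialization and multiple restarts on top of this):

```python
import math
import random

def kmeans(points, k, iters=100, seed=10):
    """Minimal Lloyd's algorithm: repeat (assign each point to its
    nearest centroid, recompute each centroid as the mean of its
    members) until the assignments settle."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    labels = [None] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid by Euclidean distance
        new_labels = [min(range(k), key=lambda c: math.dist(p, centroids[c]))
                      for p in points]
        if new_labels == labels:  # converged
            break
        labels = new_labels
        # Update step: each centroid becomes the mean of its members
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = tuple(sum(dim) / len(members)
                                     for dim in zip(*members))
    return labels, centroids

# Two well-separated, made-up blobs of customers
points = [(0.1, 0.2), (0.0, 0.0), (0.2, 0.1),
          (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
labels, centroids = kmeans(points, k=2)
print(labels)
```

With three clusters and the offer-response matrix, the same assign/update loop is what produces the labels stored in the customer_offers 'cluster' column.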

From Figure 3.24 (which shows the top five offers for each cluster), you will notice that most of the wines in the first list are champagne, and the one that isn't is Prosecco, a sparkling wine closely related to champagne. Similarly, the last cluster contains mostly Pinot Noir, along with one Malbec, another red wine. The second cluster might contain customers who care less about the specific type of wine, since its list contains a white, a red, and three sparkling wines. This might indicate that the first group would be a good target in the future for offers involving champagne, and the last group might be a good target for offers involving red wines.