Applied Supervised Learning with Python

By: Benjamin Johnston, Ishita Mathur

Overview of this book

Machine learning, the ability of a machine to produce correct answers from input data, has revolutionized the way we do business. Applied Supervised Learning with Python provides a rich understanding of how you can apply machine learning techniques in your data science projects using Python. You'll explore Jupyter Notebooks, the technology commonly used in academic and commercial circles, with in-line code running support. With the help of fun examples, you'll gain experience working with the Python machine learning toolkit, from performing basic data cleaning and processing to working with a range of regression and classification algorithms. Once you've grasped the basics, you'll learn how to build and train your own models using advanced techniques such as decision trees, ensemble modeling, validation, and error metrics. You'll also learn data visualization techniques using powerful Python libraries such as Matplotlib and Seaborn. This book also covers ensemble modeling and random forest classifiers, along with other methods for combining results from multiple models, and concludes by delving into cross-validation to test your algorithm and check how well the model works on unseen data. By the end of this book, you'll be equipped not only to work with machine learning algorithms, but also to create some of your own!
Table of Contents (9 chapters)

Chapter 2: Exploratory Data Analysis and Visualization


Activity 2: Summary Statistics and Missing Values

Solution

The steps to complete this activity are as follows:

  1. Read the data. Import the libraries used throughout this activity, then use pandas' .read_csv method to read the CSV file into a pandas DataFrame:

    import pandas as pd
    import numpy as np
    import matplotlib.pyplot as plt
    import seaborn as sns
    import missingno as msno
    
    data = pd.read_csv('house_prices.csv')
  2. Use pandas' .info() and .describe() methods to view the summary statistics of the dataset:

    data.info()
    data.describe().T

    The output of info() will be:

    Figure 2.39: The output of the info() method

    The output of describe() will be:

    Figure 2.40: The output of the describe() method

  3. Find the total count and total percentage of missing values in each column of the DataFrame and display them for columns having at least one null value, in descending order of missing percentages.

    As we did in Exercise 12: Visualizing Missing Values, we use the .isnull() function on the DataFrame to get a mask of null values. The count of nulls in each column is then obtained with .sum() over the mask DataFrame, and the fraction of nulls with .mean() over the mask DataFrame, multiplied by 100 to convert it to a percentage. Finally, we use pd.concat() to combine the totals and percentages into a single DataFrame and sort the rows by percentage of missing values:

    mask = data.isnull()
    total = mask.sum()
    percent = 100*mask.mean()
    
    missing_data = pd.concat([total, percent], axis=1,join='outer',
                   keys=['count_missing', 'perc_missing'])
    missing_data.sort_values(by='perc_missing', ascending=False, inplace=True)
    
    missing_data[missing_data.count_missing > 0]

    The output will be:

    Figure 2.41: Total count and percentage of missing values in each column
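The mask/sum/mean logic in this step can be verified on a small toy DataFrame (the column names here are made up for illustration and are not part of the house-prices dataset):

```python
import pandas as pd
import numpy as np

# Toy DataFrame with known missing counts
df = pd.DataFrame({
    'a': [1.0, np.nan, 3.0, np.nan],    # 2 of 4 missing -> 50%
    'b': [1.0, 2.0, 3.0, 4.0],          # no missing values
    'c': [np.nan, np.nan, np.nan, 4.0], # 3 of 4 missing -> 75%
})

mask = df.isnull()
total = mask.sum()           # null count per column
percent = 100 * mask.mean()  # null percentage per column

missing_data = pd.concat([total, percent], axis=1,
                         keys=['count_missing', 'perc_missing'])
missing_data.sort_values(by='perc_missing', ascending=False, inplace=True)

# Only columns with at least one null survive the filter
print(missing_data[missing_data.count_missing > 0])
```

Column 'c' appears first (75% missing) and the fully populated column 'b' is filtered out, matching the behavior described above.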

  4. Plot the nullity matrix and nullity correlation heatmap. First, we find the list of column names for those having at least one null value. Then, we use the missingno library to plot the nullity matrix (as we did in Exercise 12: Visualizing Missing Values) for a sample of 500 points, and the nullity correlation heatmap for the data in those columns:

    nullable_columns = data.columns[mask.any()].tolist()
    msno.matrix(data[nullable_columns].sample(500))
    plt.show()
    
    msno.heatmap(data[nullable_columns], figsize=(18,18))
    plt.show()

    The nullity matrix will look like this:

    Figure 2.42: Nullity matrix

    The nullity correlation heatmap will look like this:

    Figure 2.43: Nullity correlation heatmap

  5. Delete the columns having more than 80% of values missing. Use the .loc operator on the DataFrame we created in Step 3 to select only those columns that had less than 80% of values missing:

    data = data.loc[:,missing_data[missing_data.perc_missing < 80].index]
  6. Replace null values in the FireplaceQu column with the string 'NA'. Use the .fillna() method to perform the replacement:

    data['FireplaceQu'] = data['FireplaceQu'].fillna('NA')
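Steps 5 and 6 can be sketched together on a toy DataFrame with known missing percentages (the column names and the 80% threshold below are illustrative, not tied to the book's dataset):

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({
    'mostly_missing': [np.nan] * 9 + [1.0],  # 90% missing -> should be dropped
    'quality': [np.nan, 'Gd'] + ['TA'] * 8,  # 10% missing -> kept
    'price': range(10),                      # complete column
})

# Percentage of missing values per column
perc_missing = 100 * df.isnull().mean()

# Keep only columns with less than 80% of values missing
df = df.loc[:, perc_missing[perc_missing < 80].index]

# Replace remaining nulls in the categorical column with the string 'NA'
df['quality'] = df['quality'].fillna('NA')
```

After this runs, the 90%-missing column is gone and the kept categorical column contains no nulls.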

Activity 3: Visually Representing the Distribution of Values

Solution

  1. Plot a histogram using Matplotlib for the target variable, SalePrice. First, we initialize the figure using the plt.figure command and set the figure size. Then, we use Matplotlib's .hist() function as our primary plotting function, to which we pass the SalePrice series object for plotting the histogram. Lastly, we specify the axes labels and show the plot:

    plt.figure(figsize=(8,6))
    
    plt.hist(data.SalePrice, bins=range(0,800000,50000))
    
    plt.ylabel('Number of data points')
    plt.xlabel('SalePrice')
    plt.show()

    The output will be as follows:

    Figure 2.44: Histogram for the target variable

  2. Find the number of unique values within each column having the object type. Create a new DataFrame called object_variables by using the .select_dtypes function on the original DataFrame to select those columns with the object data type. Then, find the number of unique values for each column in this DataFrame by using the .nunique() function, and sort the resultant series:

    object_variables = data.select_dtypes(include=['object'])
    object_variables.nunique().sort_values()

    The output will be:

    Figure 2.45: Number of unique values within each column having the object type
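The select_dtypes/nunique combination can be checked on a small toy frame (column names here are invented for illustration):

```python
import pandas as pd

df = pd.DataFrame({
    'style': ['1Story', '2Story', '1Story'],  # object column, 2 unique values
    'zone': ['RL', 'RL', 'RM'],               # object column, 2 unique values
    'area': [100, 200, 300],                  # numeric column, excluded
})

# Only the string (object) columns survive the dtype filter
object_variables = df.select_dtypes(include=['object'])

# Unique-value count per object column, in ascending order
unique_counts = object_variables.nunique().sort_values()
```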

  3. Create a DataFrame representing the number of occurrences for each categorical value in the HouseStyle column. Use the .value_counts() function to calculate the frequencies of each value in decreasing order in the form of a pandas series, then reset the index to give us a DataFrame and sort the values by the index:

    counts = data.HouseStyle.value_counts(dropna=False)
    counts.reset_index().sort_values(by='index')

    The output will be:

    Figure 2.46: Number of occurrences for each categorical value in the HouseStyle column
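The value_counts behavior, including dropna=False keeping nulls as their own category, can be seen on a toy series. (Note that the column names produced by .reset_index() on a value_counts result differ between pandas versions, so the assertions here stick to the series itself.)

```python
import pandas as pd

s = pd.Series(['1Story', '2Story', '1Story', None, '1Story'])

# Frequencies in decreasing order; dropna=False keeps the null as a category
counts = s.value_counts(dropna=False)

# Converting to a DataFrame; the resulting column names vary across pandas versions
counts_df = counts.reset_index()
```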

  4. Plot a pie chart representing these counts. As in Step 1, we initialize the image using plt.figure() and use the plt.title() and plt.show() methods to set the figure title and display it respectively. The primary plotting function used is plt.pie(), to which we pass the series we created in the previous step:

    plt.figure(figsize=(10,10))
    plt.pie(counts, labels=counts.index)
    plt.title('Pie chart showing counts for\nHouseStyle categories')
    plt.show()

    The output will be as follows:

    Figure 2.47: Pie chart representing the counts

  5. Find the number of unique values within each column having the number type. As done in Step 2, now select columns having the numpy.number data type and find the number of unique values in each column using .nunique(). Sort the resultant series in descending order:

    numeric_variables = data.select_dtypes(include=[np.number])
    numeric_variables.nunique().sort_values(ascending=False)

    The output will be as follows:

    Figure 2.48: Number of unique values within each column having the number type

  6. Plot a histogram using Seaborn for the LotArea variable. Use Seaborn's .distplot() function as the primary plotting function, to which the LotArea series in the DataFrame needs to be passed (without any null values; use .dropna() on the series to remove them). To improve the plot view, also set the bins parameter and specify the X axis limits using plt.xlim():

    plt.figure(figsize=(10,7))
    sns.distplot(data.LotArea.dropna(), bins=range(0,100000,1000))
    plt.xlim(0,100000)
    plt.show()

    The output will be as follows:

    Figure 2.49: Histogram for the LotArea variable

  7. Calculate the skew and kurtosis values for the values in each column:

    data.skew().sort_values()
    data.kurt()

    The output for skew values will be:

    Figure 2.50: Skew values for each column

    The output for kurtosis values will be:

    Figure 2.51: Kurtosis values for each column
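As a quick sanity check on the sign conventions used by these statistics: a distribution with a long right tail has positive skew, and a heavy-tailed distribution has high kurtosis. A toy example (synthetic data, not the house-prices columns):

```python
import pandas as pd

# A right-skewed sample: mostly small values plus one large outlier
s = pd.Series([1, 2, 2, 3, 3, 3, 4, 100])

skew = s.skew()  # positive, reflecting the long right tail
kurt = s.kurt()  # large and positive, reflecting the heavy tail
```

Variables such as LotArea, which have a few very large values, show the same pattern in Figures 2.50 and 2.51.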

Activity 4: Relationships Within the Data

Solution

  1. Plot the correlation heatmap for the dataset. As we did in Exercise 23: Correlation Heatmap, plot the heatmap using Seaborn's .heatmap() function and pass the feature correlation matrix (as determined by using pandas' .corr() function on the DataFrame). Additionally, set the color map to RdBu using the cmap parameter and the minimum and maximum values on the color scale to -1 and 1 using the vmin and vmax parameters respectively:

    plt.figure(figsize = (12,10))
    sns.heatmap(data.corr(), square=True, cmap="RdBu", vmin=-1, vmax=1)
    plt.show()

    The output will be:

    Figure 2.52: Heatmap for the dataset
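The matrix passed to the heatmap comes from pandas' .corr(); its values can be checked on a toy frame with known relationships (column names illustrative):

```python
import pandas as pd

df = pd.DataFrame({
    'x': [1, 2, 3, 4],
    'y': [2, 4, 6, 8],  # perfectly correlated with x -> corr of 1
    'z': [4, 3, 2, 1],  # perfectly anti-correlated with x -> corr of -1
})

# Pairwise Pearson correlations; diagonal entries are always 1
corr = df.corr()
```

This is why vmin=-1 and vmax=1 are the natural bounds for the heatmap's color scale.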

  2. Plot a more compact heatmap having annotations for correlation values using the following subset of features:

    feature_subset = [
        'GarageArea', 'GarageCars','GarageCond','GarageFinish','GarageQual','GarageType',
        'GarageYrBlt','GrLivArea','LotArea','MasVnrArea','SalePrice'
    ]

    Now do the same as in the previous step, this time selecting only the above columns in the dataset, and adding an annot parameter with the True value to the primary plotting function, all else remaining the same:

    plt.figure(figsize = (12,10))
    sns.heatmap(data[feature_subset].corr(), square=True, annot=True, cmap="RdBu", vmin=-1, vmax=1)
    plt.show()

    The output will be as follows:

    Figure 2.53: Heatmap with annotations for correlation values

  3. Display the pairplot for the same subset of features, with the KDE plot on the diagonals and scatter plot elsewhere. Use Seaborn's .pairplot() function to plot the pairplot for the non-null values in the selected columns of the DataFrame. To make the diagonal plots KDE plots, pass 'kde' to the diag_kind parameter, and to set all other plots as scatter plots, pass 'scatter' to the kind parameter:

    sns.pairplot(data[feature_subset].dropna(), kind ='scatter', diag_kind='kde')
    plt.show()

    The output will be:

    Figure 2.54: Pairplot for the same subset of features

  4. Create a boxplot to show the variation in SalePrice for each category of GarageCars. The primary plotting function here is Seaborn's .boxplot() function, to which we pass the DataFrame along with the x and y parameters: x is the categorical variable (GarageCars) and y is the continuous variable whose variation we want to see within each category (SalePrice):

    plt.figure(figsize=(10, 10))
    sns.boxplot(x='GarageCars', y="SalePrice", data=data)
    plt.show()

    The output will be as follows:

    Figure 2.55: Boxplot showing variation in SalePrice for each category of GarageCars
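The summary behind such a boxplot is a per-category spread; for instance, the line inside each box is the group median. A groupby sketch on toy data (values invented for illustration):

```python
import pandas as pd

df = pd.DataFrame({
    'GarageCars': [1, 1, 2, 2, 2, 3],
    'SalePrice': [100, 120, 150, 160, 200, 300],
})

# Median SalePrice per GarageCars category -- the center line of each box
medians = df.groupby('GarageCars')['SalePrice'].median()
```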

  5. Plot a line graph using Seaborn to show the variation in SalePrice for older and more recently built flats. Here, we will plot a line plot using Seaborn's .lineplot() function. Since we want to see the variation in SalePrice, we take this as the y variable, and as the variation is across a period of time, we take YearBuilt as the x variable. Keeping this in mind, we pass the respective series as values to the y and x parameters for the primary plotting function. We also pass a ci=None parameter to hide the confidence interval band that Seaborn would otherwise draw around the line:

    plt.figure(figsize=(10,7))
    sns.lineplot(x=data.YearBuilt, y=data.SalePrice, ci=None)
    plt.show()

    The output will be as follows:

    Figure 2.56: Line graph showing the variation in SalePrice for older and more recently built flats
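With the interval band suppressed, sns.lineplot effectively draws the mean of SalePrice at each YearBuilt value. The same aggregation can be computed directly with a groupby (toy data for illustration):

```python
import pandas as pd

df = pd.DataFrame({
    'YearBuilt': [1950, 1950, 2000, 2000],
    'SalePrice': [100, 200, 300, 500],
})

# Mean price per construction year -- the values the line passes through
yearly_mean = df.groupby('YearBuilt')['SalePrice'].mean()
```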