**Solution**

The steps to complete this activity are as follows:

1. Read the data. Use pandas' **.read_csv()** method to read the CSV file into a pandas DataFrame:

    ```python
    data = pd.read_csv('house_prices.csv')
    ```

2. Use pandas' **.info()** and **.describe()** methods to view the summary statistics of the dataset:

    ```python
    data.info()
    data.describe().T
    ```

    The output of **info()** will be:

    The output of **describe()** will be:

3. Find the total count and total percentage of missing values in each column of the DataFrame, and display them for columns having at least one null value, in descending order of missing percentages.

    As we did in *Exercise 12: Visualizing Missing Values*, we use the **.isnull()** method on the DataFrame to get a mask, find the count of null values in each column by applying **.sum()** to the mask DataFrame, and find the fraction of null values by applying **.mean()** to the mask and multiplying by 100 to convert it to a percentage. Then, we use **pd.concat()** to combine the total and percentage of null values into a single DataFrame, and sort the rows by the percentage of missing values:

    ```python
    mask = data.isnull()
    total = mask.sum()
    percent = 100 * mask.mean()
    missing_data = pd.concat([total, percent], axis=1, join='outer',
                             keys=['count_missing', 'perc_missing'])
    missing_data.sort_values(by='perc_missing', ascending=False, inplace=True)
    missing_data[missing_data.count_missing > 0]
    ```

The output will be:
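The mask-based counting above can be checked on a small scale. The following is a minimal sketch using a hypothetical toy DataFrame (not the house_prices data), showing that the same **isnull()**/**sum()**/**mean()**/**concat()** chain produces the expected counts and percentages:

```python
import pandas as pd
import numpy as np

# Hypothetical toy frame: 'A' has 2 of 4 values missing, 'B' has 1, 'C' none
toy = pd.DataFrame({
    'A': [1.0, np.nan, 3.0, np.nan],
    'B': [np.nan, 2.0, 3.0, 4.0],
    'C': [1.0, 2.0, 3.0, 4.0],
})

mask = toy.isnull()                  # boolean mask of missing values
total = mask.sum()                   # count of nulls per column
percent = 100 * mask.mean()          # fraction of nulls per column, as a percentage
missing = pd.concat([total, percent], axis=1,
                    keys=['count_missing', 'perc_missing'])
missing = missing.sort_values(by='perc_missing', ascending=False)
missing = missing[missing.count_missing > 0]   # keep only columns with nulls
```

Column `C` drops out entirely, and `A` (50% missing) sorts above `B` (25% missing).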

4. Plot the nullity matrix and nullity correlation heatmap. First, find the list of columns having at least one null value. Then, use the **missingno** library to plot the nullity matrix (as we did in *Exercise 12: Visualizing Missing Values*) for a sample of 500 points, and the nullity correlation heatmap for the data in those columns:

    ```python
    nullable_columns = data.columns[mask.any()].tolist()
    msno.matrix(data[nullable_columns].sample(500))
    plt.show()
    msno.heatmap(data[nullable_columns], figsize=(18, 18))
    plt.show()
    ```

The nullity matrix will look like this:

The nullity correlation heatmap will look like this:

5. Delete the columns having more than 80% of their values missing. Use the **.loc** operator on the DataFrame, together with the summary we created in *Step 3*, to select only those columns that have less than 80% of their values missing:

    ```python
    data = data.loc[:, missing_data[missing_data.perc_missing < 80].index]
    ```
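The column-filtering pattern with **.loc** can be illustrated on toy data. This is a minimal sketch (hypothetical values, not the house_prices data) where one column is exactly 80% missing and is therefore dropped by the strict `< 80` threshold:

```python
import pandas as pd
import numpy as np

# Hypothetical toy frame: one column 80% missing, one only 20% missing
toy = pd.DataFrame({
    'mostly_missing': [np.nan, np.nan, np.nan, np.nan, 1.0],
    'partly_missing': [1.0, np.nan, 3.0, 4.0, 5.0],
})

perc_missing = 100 * toy.isnull().mean()
# keep only columns with strictly less than 80% of values missing
kept = toy.loc[:, perc_missing[perc_missing < 80].index]
```

Note that a column at exactly 80% missing fails the strict inequality and is removed.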

6. Replace null values in the **FireplaceQu** column with **NA** values. Use the **.fillna()** method to replace null values with the **NA** string:

    ```python
    data['FireplaceQu'] = data['FireplaceQu'].fillna('NA')
    ```
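As a quick sanity check of the **.fillna()** call, here is a minimal sketch on a hypothetical series (standing in for the **FireplaceQu** column):

```python
import pandas as pd
import numpy as np

# Hypothetical quality ratings with two missing entries
quality = pd.Series(['Gd', np.nan, 'TA', np.nan], name='FireplaceQu')

# replace nulls with the literal string 'NA' (a category meaning "no fireplace")
filled = quality.fillna('NA')
```

After the call, the series contains no nulls; the missing entries are the string `'NA'`, which pandas treats as an ordinary categorical value rather than a missing one.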

**Solution**

1. Plot a histogram using Matplotlib for the target variable, **SalePrice**. First, initialize the figure using the **plt.figure()** command and set the figure size. Then, use Matplotlib's **.hist()** function as the primary plotting function, passing it the **SalePrice** series to plot the histogram. Lastly, specify the axes labels and show the plot:

    ```python
    plt.figure(figsize=(8, 6))
    plt.hist(data.SalePrice, bins=range(0, 800000, 50000))
    plt.ylabel('Number of data points')
    plt.xlabel('SalePrice')
    plt.show()
    ```

The output will be as follows:
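The bar heights that **plt.hist()** draws are just bin counts, which can be computed directly with **np.histogram** using the same style of `range`-based bin edges. A minimal sketch on hypothetical prices (not the actual **SalePrice** data):

```python
import numpy as np

# Hypothetical sale prices
prices = np.array([35000, 120000, 140000, 260000, 350000])

# Same pattern as the plot: fixed-width bins built from a range of edges
counts, edges = np.histogram(prices, bins=range(0, 500000, 100000))
```

With edges at 0, 100k, 200k, 300k, and 400k, this yields four bins with counts 1, 2, 1, and 1.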

2. Find the number of unique values within each column having the object type. Create a new DataFrame called **object_variables** by using the **.select_dtypes()** method on the original DataFrame to select columns with the **object** data type. Then, find the number of unique values in each column of this DataFrame using **.nunique()**, and sort the resulting series:

    ```python
    # 'object' replaces the deprecated np.object alias removed from NumPy
    object_variables = data.select_dtypes(include=['object'])
    object_variables.nunique().sort_values()
    ```

The output will be:
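The **.select_dtypes()**/**.nunique()** combination can be demonstrated on a hypothetical toy frame (the column names echo the dataset but the values are made up):

```python
import pandas as pd

# Hypothetical frame with two object (string) columns and one numeric column
toy = pd.DataFrame({
    'HouseStyle': ['1Story', '2Story', '1Story', '1.5Fin'],
    'Street': ['Pave', 'Pave', 'Grvl', 'Pave'],
    'SalePrice': [200000, 250000, 180000, 210000],
})

object_cols = toy.select_dtypes(include=['object'])   # drops SalePrice
uniques = object_cols.nunique().sort_values()          # ascending unique counts
```

`SalePrice` is excluded as numeric, and the result sorts `Street` (2 unique values) before `HouseStyle` (3).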

3. Create a DataFrame representing the number of occurrences of each categorical value in the **HouseStyle** column. Use the **.value_counts()** method to calculate the frequencies of each value, in decreasing order, in the form of a pandas series, then reset the index to give us a DataFrame and sort the values by the index:

    ```python
    counts = data.HouseStyle.value_counts(dropna=False)
    counts.reset_index().sort_values(by='index')
    ```

The output will be:
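The effect of `dropna=False` in **.value_counts()** is worth seeing in isolation: it keeps missing values as their own category in the counts. A minimal sketch on a hypothetical series:

```python
import pandas as pd
import numpy as np

# Hypothetical HouseStyle values including one missing entry
styles = pd.Series(['1Story', '2Story', '1Story', np.nan, '1Story'],
                   name='HouseStyle')

# dropna=False keeps NaN as a countable category
counts = styles.value_counts(dropna=False)
```

The most frequent value comes first, and the NaN entry appears as one of the index labels rather than being silently dropped.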

4. Plot a pie chart representing these counts. As in *Step 1*, initialize the figure using **plt.figure()** and use the **plt.title()** and **plt.show()** methods to set the figure title and display the plot, respectively. The primary plotting function used is **plt.pie()**, to which we pass the series created in the previous step:

    ```python
    plt.figure(figsize=(10, 10))
    plt.pie(counts, labels=counts.index)
    plt.title('Pie chart showing counts for\nHouseStyle categories')
    plt.show()
    ```

The output will be as follows:

5. Find the number of unique values within each column having the number type. As in *Step 2*, now select columns having the **numpy.number** data type and find the number of unique values in each column using **.nunique()**. Sort the resulting series in descending order:

    ```python
    numeric_variables = data.select_dtypes(include=[np.number])
    numeric_variables.nunique().sort_values(ascending=False)
    ```

The output will be as follows:

6. Plot a histogram using Seaborn for the **LotArea** variable. Use Seaborn's **.distplot()** function as the primary plotting function, passing it the **LotArea** series from the DataFrame (with null values removed using **.dropna()**). To improve the plot view, also set the **bins** parameter and specify the *X*-axis limits using **plt.xlim()**:

    ```python
    plt.figure(figsize=(10, 7))
    sns.distplot(data.LotArea.dropna(), bins=range(0, 100000, 1000))
    plt.xlim(0, 100000)
    plt.show()
    ```

The output will be as follows:

7. Calculate the skew and kurtosis values for the values in each column:

    ```python
    data.skew().sort_values()
    data.kurt()
    ```

The output for skew values will be:

The output for kurtosis values will be:
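To see what **.skew()** and **.kurt()** report, here is a minimal sketch on a hypothetical toy frame with one right-skewed column (a single large outlier pulls the tail to the right) and one symmetric column:

```python
import pandas as pd

# Hypothetical data: 'right_skewed' has a long right tail; 'symmetric' does not
toy = pd.DataFrame({
    'right_skewed': [1, 1, 2, 2, 3, 20],
    'symmetric': [1, 2, 3, 4, 5, 6],
})

skews = toy.skew().sort_values()   # ascending, as in the solution code
kurts = toy.kurt()                 # excess kurtosis per column
```

The symmetric column has skewness of zero and, being flat rather than peaked, negative excess kurtosis; the outlier-laden column has strongly positive skew.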

**Solution**

1. Plot the correlation heatmap for the dataset. As we did in *Exercise 23: Correlation Heatmap*, plot the heatmap using Seaborn's **.heatmap()** function and pass it the feature correlation matrix (as determined by using pandas' **.corr()** method on the DataFrame). Additionally, set the color map to **RdBu** using the **cmap** parameter, and set the minimum and maximum values on the color scale to **-1** and **1** using the **vmin** and **vmax** parameters, respectively:

    ```python
    plt.figure(figsize=(12, 10))
    sns.heatmap(data.corr(), square=True, cmap="RdBu", vmin=-1, vmax=1)
    plt.show()
    ```

The output will be:
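The matrix the heatmap visualizes comes straight from **.corr()**, which returns pairwise Pearson correlations in [-1, 1] — hence the `vmin=-1, vmax=1` color scale. A minimal sketch on a hypothetical toy frame where one pair of columns is perfectly linearly related:

```python
import pandas as pd

# Hypothetical data: SalePrice is an exact linear function of GrLivArea
toy = pd.DataFrame({
    'GrLivArea': [800, 1200, 1600, 2000],
    'SalePrice': [100000, 150000, 200000, 250000],
    'LotArea':   [9000, 7000, 11000, 8000],
})

corr = toy.corr()   # symmetric matrix of pairwise Pearson correlations
```

A perfectly linear pair produces a correlation of 1, which the heatmap renders at the extreme end of the `RdBu` scale.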

2. Plot a more compact heatmap having annotations for correlation values, using the following subset of features:

    ```python
    feature_subset = [
        'GarageArea', 'GarageCars', 'GarageCond', 'GarageFinish',
        'GarageQual', 'GarageType', 'GarageYrBlt', 'GrLivArea',
        'LotArea', 'MasVnrArea', 'SalePrice'
    ]
    ```

    Now do the same as in the previous step, this time selecting only the above columns in the dataset and passing an **annot** parameter with the value **True** to the primary plotting function, all else remaining the same:

    ```python
    plt.figure(figsize=(12, 10))
    sns.heatmap(data[feature_subset].corr(), square=True, annot=True,
                cmap="RdBu", vmin=-1, vmax=1)
    plt.show()
    ```

The output will be as follows:

3. Display the pairplot for the same subset of features, with KDE plots on the diagonal and scatter plots elsewhere. Use Seaborn's **.pairplot()** function to plot the pairplot for the non-null values in the selected columns of the DataFrame. To make the diagonal plots KDE plots, pass **kde** to the **diag_kind** parameter, and to set all other plots as scatter plots, pass **scatter** to the **kind** parameter:

    ```python
    sns.pairplot(data[feature_subset].dropna(), kind='scatter', diag_kind='kde')
    plt.show()
    ```

The output will be:

4. Create a boxplot to show the variation in **SalePrice** for each category of **GarageCars**. The primary plotting function used here is Seaborn's **.boxplot()** function, to which we pass the DataFrame along with the **x** and **y** parameters; the former is the categorical variable and the latter is the continuous variable over which we want to see the variation within each category, that is, **GarageCars** and **SalePrice**, respectively:

    ```python
    plt.figure(figsize=(10, 10))
    sns.boxplot(x='GarageCars', y='SalePrice', data=data)
    plt.show()
    ```

The output will be as follows:
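The center line of each box in the boxplot is the per-category median, which can be checked numerically with a **groupby**. A minimal sketch on hypothetical data:

```python
import pandas as pd

# Hypothetical data: sale prices for houses with 0, 1, and 2 garage spots
toy = pd.DataFrame({
    'GarageCars': [0, 1, 1, 2, 2, 2],
    'SalePrice': [90000, 120000, 140000, 200000, 220000, 240000],
})

# median SalePrice per GarageCars category, matching the box midlines
medians = toy.groupby('GarageCars')['SalePrice'].median()
```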

5. Plot a line graph using Seaborn to show the variation in **SalePrice** between older and more recently built flats. Here, we plot a line plot using Seaborn's **.lineplot()** function. Since we want to see the variation in **SalePrice**, we take it as the *y* variable, and since the variation is across a period of time, we take **YearBuilt** as the *x* variable. Keeping this in mind, we pass the respective series as values to the **y** and **x** parameters of the primary plotting function. We also pass **ci=None** to hide the standard deviation indicator around the line in the plot:

    ```python
    plt.figure(figsize=(10, 7))
    sns.lineplot(x=data.YearBuilt, y=data.SalePrice, ci=None)
    plt.show()
    ```

The output will be as follows:
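When several houses share the same **YearBuilt**, **.lineplot()** aggregates them (by default, taking the mean of *y* at each *x*), which is the estimate the drawn line follows. That aggregation can be reproduced directly with a **groupby** on a hypothetical toy frame:

```python
import pandas as pd

# Hypothetical data: two houses per construction year
toy = pd.DataFrame({
    'YearBuilt': [1950, 1950, 2000, 2000],
    'SalePrice': [100000, 120000, 300000, 340000],
})

# mean SalePrice per YearBuilt: the values the lineplot's line passes through
yearly_mean = toy.groupby('YearBuilt')['SalePrice'].mean()
```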