The Data Analysis Workshop

By: Gururajan Govindan, Shubhangi Hora, Konstantin Palagachev

Overview of this book

Businesses today operate online and generate data almost continuously. While not all data may seem useful in its raw form, when processed and analyzed correctly it can reveal valuable hidden insights. The Data Analysis Workshop will help you learn how to discover these hidden patterns in your data, analyze them, and leverage the results to help transform your business. The book begins by taking you through the use case of a bike rental shop. You'll be shown how to correlate data, plot histograms, and analyze temporal features. As you progress, you'll learn how to plot data for a hydraulic system using the Seaborn and Matplotlib libraries, and explore a variety of use cases that show you how to join and merge databases, prepare data for analysis, and handle imbalanced data. By the end of the book, you'll have learned different data analysis techniques, including hypothesis testing, correlation, and null-value imputation, and will have become a confident data analyst.
Table of Contents (12 chapters)
Preface
7. Analyzing the Heart Disease Dataset
9. Analysis of the Energy Consumed by Appliances

Importing the Data

To begin the actual data analysis, we need to import a few necessary packages. As one of these packages, imbalanced-learn, requires a separate installation, the first step is to run the following command in the Anaconda Prompt:

conda install -c conda-forge imbalanced-learn
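
If you are not working in an Anaconda environment, the same package can alternatively be installed with pip (this is an alternative to the command shown above, assuming pip manages your Python environment):

pip install imbalanced-learn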

You can then proceed with the imports in your Jupyter Notebook:

import pandas as pd                                    # data loading and manipulation
import numpy as np                                     # numerical operations
import seaborn as sns                                  # statistical plotting
import matplotlib.pyplot as plt                        # general plotting
from sklearn.cluster import KMeans                     # k-means clustering
from sklearn import preprocessing                      # preprocessing utilities
from sklearn.preprocessing import RobustScaler         # scaling robust to outliers
from sklearn.preprocessing import StandardScaler       # zero-mean, unit-variance scaling
from sklearn.preprocessing import Normalizer           # per-sample normalization
from imblearn.over_sampling import SMOTE               # oversampling for imbalanced data
from sklearn.model_selection import train_test_split   # train/test splitting
import warnings
warnings.filterwarnings("ignore")                      # suppress warning messages

Next, import the dataset into the work environment:

df = pd.read_csv("https://raw.githubusercontent.com/"\
                 ...
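
The URL in the snippet above is truncated here. Once the full path to the CSV file is supplied, a quick check such as the following (a minimal sketch, not part of the original code) confirms that the DataFrame was loaded as expected:

df.head()   # preview the first five rows
df.info()   # column names, dtypes, and non-null counts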