Practical Data Analysis
When you have a good understanding of a phenomenon, you can make predictions about it. Data analysis makes this possible by exploring the past and building predictive models.
The data analysis process is composed of the following steps: problem definition, data preparation, data exploration, predictive modeling, and visualization of results.
All these activities can be grouped as shown in the following figure:

The problem definition starts with high-level questions such as how to track differences in behavior between groups of customers, or what's going to be the gold price in the next month. Understanding the objectives and requirements from a domain perspective is the key to a successful data analysis project.
Types of data analysis questions are listed as follows:
Data preparation is about how to obtain, clean, normalize, and transform the raw data into an optimal dataset, trying to avoid data quality issues such as invalid, ambiguous, out-of-range, or missing values. This process can take a lot of your time. In Chapter 2, Working with Data, we go into more detail about working with data and use OpenRefine to address the more complicated tasks. Analyzing data that has not been carefully prepared can lead to highly misleading results.
The characteristics of good data follow from the issues above: it is valid, unambiguous, within range, and complete.
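To make the cleaning step concrete, here is a minimal sketch in plain Python. The field names, records, and valid ranges are illustrative assumptions, not data from the book:

```python
# A sketch of basic data cleaning on a hypothetical list of records.
RECORDS = [
    {"customer": "a01", "age": 34, "spend": 120.5},
    {"customer": "a02", "age": None, "spend": 80.0},  # missing value
    {"customer": "a03", "age": 210, "spend": 55.3},   # out-of-range value
    {"customer": "a04", "age": 28, "spend": -10.0},   # invalid (negative) value
    {"customer": "a05", "age": 45, "spend": 230.9},
]

def is_valid(record, age_range=(0, 120)):
    """Keep only records with no missing, out-of-range, or invalid fields."""
    if record["age"] is None or record["spend"] is None:
        return False  # missing value
    if not (age_range[0] <= record["age"] <= age_range[1]):
        return False  # out-of-range value
    if record["spend"] < 0:
        return False  # invalid value
    return True

clean = [r for r in RECORDS if is_valid(r)]
print([r["customer"] for r in clean])  # only a01 and a05 survive
```

In practice a tool such as OpenRefine performs this kind of filtering interactively, but the underlying idea is the same: define what "valid" means for each field, then drop or repair everything else.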
Data exploration is essentially looking at the data in a graphical or statistical form trying to find patterns, connections, and relations in the data. Visualization is used to provide overviews in which meaningful patterns may be found.
In Chapter 3, Data Visualization, we present a visualization framework (D3.js) and we implement some examples on how to use visualization as a data exploration tool.
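Alongside graphical exploration, a simple numeric summary can already reveal a relation. The following sketch computes the Pearson correlation coefficient between two invented series (the data is illustrative, not from the book):

```python
import statistics

# Hypothetical paired observations, e.g., advertising spend vs. sales.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8]

def pearson(xs, ys):
    """Pearson correlation coefficient: a quick numeric check for
    linear relations, complementing a scatter plot."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(x, y)
print(round(r, 3))  # close to 1: a strong positive linear relation
```

A value near +1 or -1 suggests a linear pattern worth plotting; values near 0 mean no linear relation (though a nonlinear one may still exist, which is why visualization matters).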
Predictive modeling is a process used in data analysis to create or choose a statistical model that best predicts the probability of an outcome. In this book, we use a variety of such models, grouped into three categories based on their outcome:
| Outcome | Chapter | Algorithm |
|---|---|---|
| Categorical outcome (Classification) | 4 | Naïve Bayes Classifier |
| | 11 | Natural Language Toolkit + Naïve Bayes Classifier |
| Numerical outcome (Regression) | 6 | Random Walk |
| | 8 | Support Vector Machines |
| | 9 | Cellular Automata |
| | 8 | Distance Based Approach + k-nearest neighbor |
| Descriptive modeling (Clustering) | 5 | Fast Dynamic Time Warping (FDTW) + Distance Metrics |
| | 10 | Force Layout and Fruchterman-Reingold layout |
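To give a flavor of the models in the table, here is a minimal k-nearest-neighbor classifier. This is a generic sketch with invented toy data, not the book's Chapter 8 implementation:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training points, using Euclidean distance."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    labels = [label for _, label in nearest]
    return Counter(labels).most_common(1)[0][0]

# Toy 2-D training set (invented for illustration): two separated groups.
train = [((0, 0), "A"), ((1, 0), "A"), ((0, 1), "A"),
         ((5, 5), "B"), ((6, 5), "B"), ((5, 6), "B")]

print(knn_predict(train, (0.5, 0.5)))  # "A"
print(knn_predict(train, (5.5, 5.5)))  # "B"
```

The same idea, "predict from the most similar known examples", underlies the distance-based approaches in the table; the classifiers and regressors in later chapters refine how similarity and the vote (or average) are computed.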
Another important task in this step is evaluating whether the model we chose is optimal for the particular problem.
The No Free Lunch theorem, proposed by Wolpert in 1996, states:
"No Free Lunch theorems have shown that learning algorithms cannot be universally good."
Model evaluation helps us ensure that our analysis is not over-optimistic or over-fitted. In this book, we present two different ways to validate the model:
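One common way to guard against over-fitting is a hold-out split: train on one part of the data and measure accuracy on a part the model has never seen. This is a generic sketch with an invented toy "model", not necessarily one of the specific validation methods used later in the book:

```python
import random

def holdout_split(data, test_fraction=0.3, seed=42):
    """Shuffle the data and split it into train and test subsets,
    so the model is evaluated on examples it never saw."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

def accuracy(model, test):
    """Fraction of test examples the model labels correctly."""
    return sum(1 for x, y in test if model(x) == y) / len(test)

# Toy dataset and a trivially correct rule, for illustration only.
data = [(x, "big" if x > 10 else "small") for x in range(20)]
train, test = holdout_split(data)
model = lambda x: "big" if x > 10 else "small"
print(accuracy(model, test))  # 1.0 on this perfectly separable toy data
```

On real data, a large gap between training accuracy and test accuracy is the classic symptom of over-fitting.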
This is the final step in our analysis process, in which we need to answer the following questions:

How are we going to present the results? For example, as tabular reports, 2D plots, dashboards, or infographics.

Where are the results going to be deployed? For example, as a printed hard copy or poster, or on mobile devices, a desktop interface, or the web.
Each choice will depend on the kind of analysis and the particular data. In the following chapters, we will learn how to do standalone plotting in Python with matplotlib and web visualization with D3.js.
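As a first taste of standalone plotting, here is a minimal matplotlib sketch. The series is invented (hypothetical gold prices, echoing the problem-definition example), and the `Agg` backend writes the figure to a file without needing a display:

```python
import matplotlib
matplotlib.use("Agg")  # render to a file; no display required
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr", "May"]
gold_price = [1890, 1920, 1975, 1940, 2010]  # hypothetical values

fig, ax = plt.subplots()
ax.plot(months, gold_price, marker="o")
ax.set_xlabel("Month")
ax.set_ylabel("Price (USD/oz)")
ax.set_title("Hypothetical gold price")
fig.savefig("gold_price.png")
```

The result is a self-contained PNG suitable for a report or a printed hard copy; for interactive, web-deployed visualizations we will turn to D3.js instead.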