Real-world data is usually noisy and inconsistent, and often contains missing observations. No classification, regression, or clustering model can extract relevant information from raw, unprocessed data.
Data preprocessing consists of cleaning, filtering, transforming, and normalizing raw observations, using statistics to correlate features or groups of features, identify trends and patterns, and filter out noise. The purpose of cleansing raw data is as follows:
To extract some basic knowledge from raw datasets
To evaluate the quality of data and generate clean datasets for unsupervised or supervised learning
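As a minimal sketch of the kind of cleansing step described above, the following Python snippet imputes missing observations and normalizes the result. The function name, the use of NumPy, and the choice of mean imputation with z-score normalization are illustrative assumptions, not prescriptions from the text:

```python
import numpy as np

def cleanse(raw):
    """Replace missing observations (NaN) with the mean of the observed
    values, then normalize to zero mean and unit variance."""
    x = np.asarray(raw, dtype=float)
    mean = np.nanmean(x)                 # mean over observed values only
    x = np.where(np.isnan(x), mean, x)   # impute missing entries
    std = x.std()
    return (x - mean) / std if std > 0 else x - mean

# Example: two missing observations are imputed, then the series is
# rescaled so downstream models see comparable feature magnitudes.
clean = cleanse([1.0, np.nan, 3.0, 4.0, np.nan, 6.0])
```

Imputing with the mean is the simplest option; in practice, the right strategy depends on why the observations are missing.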
You should not underestimate the power of traditional statistical analysis methods to infer and classify information from textual or unstructured data.
In this chapter, you will learn how to:
Apply commonly used moving average techniques to detect long-term trends in a time series
Identify market and sector cycles using discrete Fourier series
Leverage the discrete Kalman filter to...
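To make the first of these techniques concrete, here is a minimal sketch of an equally weighted simple moving average, one of the commonly used smoothing techniques for detecting long-term trends. The function name, window size, and synthetic data are assumptions for illustration only:

```python
import numpy as np

def simple_moving_average(series, window):
    """Smooth a time series with an equally weighted sliding window."""
    if window < 1 or window > len(series):
        raise ValueError("window must be between 1 and the series length")
    weights = np.full(window, 1.0 / window)  # equal weights summing to 1
    # mode="valid" keeps only positions where the window fully overlaps
    # the data, so the output has len(series) - window + 1 points.
    return np.convolve(series, weights, mode="valid")

# A noisy upward trend: averaging over a 10-point window attenuates the
# noise and exposes the underlying long-term drift.
rng = np.random.default_rng(42)
raw = np.linspace(0.0, 10.0, 100) + rng.normal(0.0, 1.0, 100)
smoothed = simple_moving_average(raw, window=10)
```

A larger window yields a smoother curve at the cost of lag, a trade-off explored in more detail in the chapter.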