Preprocessing data using different techniques
In the real world, we usually have to deal with a lot of raw data. This raw data is not readily ingestible by machine learning algorithms, so we have to preprocess it before we feed it into various algorithms.
Getting ready
Let's see how to preprocess data in Python. To start off, open a file with a .py extension, for example, preprocessor.py, in your favorite text editor. Add the following lines to this file:
import numpy as np
from sklearn import preprocessing
We just imported a couple of necessary packages. Let's create some sample data. Add the following line to this file:
data = np.array([[3, -1.5, 2, -5.4],
                 [0, 4, -0.3, 2.1],
                 [1, 3.3, -1.9, -4.3]])
We are now ready to operate on this data.
How to do it…
Data can be preprocessed in many ways. We will discuss a few of the most commonly used preprocessing techniques.
Mean removal
It's usually beneficial to remove the mean from each feature so that it's centered on zero. This helps remove any bias from the features. Add the following lines to the file that we opened earlier:
data_standardized = preprocessing.scale(data)
print("\nMean =", data_standardized.mean(axis=0))
print("Std deviation =", data_standardized.std(axis=0))
We are now ready to run the code. To do this, run the following command on your Terminal:
$ python preprocessor.py
You will see the following output on your Terminal:
Mean = [ 5.55111512e-17 -1.11022302e-16 -7.40148683e-17 -7.40148683e-17]
Std deviation = [ 1.  1.  1.  1.]
You can see that the mean is almost 0 and the standard deviation is 1.
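Under the hood, scale() simply subtracts each column's mean and divides by its standard deviation. As a quick sanity check (a minimal sketch, not part of the original recipe, that assumes it's appended to the same file), you can reproduce the result with plain NumPy:
# Standardize manually: subtract each column's mean, divide by its std
data_manual = (data - data.mean(axis=0)) / data.std(axis=0)
print("\nManual mean =", data_manual.mean(axis=0))
print("Manual std deviation =", data_manual.std(axis=0))
The printed values should match the mean and standard deviation shown above.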
Scaling
The values of each feature in a datapoint can vary over very different ranges. So, it is sometimes important to scale the features so that they are on a level playing field. Add the following lines to the file and run the code:
data_scaler = preprocessing.MinMaxScaler(feature_range=(0, 1))
data_scaled = data_scaler.fit_transform(data)
print("\nMin max scaled data =", data_scaled)
After scaling, all the feature values fall within the specified range. The output will be displayed, as follows:
Min max scaled data = [[ 1.          0.          1.          0.        ]
 [ 0.          1.          0.41025641  1.        ]
 [ 0.33333333  0.87272727  0.          0.14666667]]
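For feature_range=(0, 1), MinMaxScaler maps each column through (x - min) / (max - min). If you want to verify this yourself (a short sketch appended to the same file, not part of the original recipe), the manual computation produces the same matrix:
# Verify min-max scaling manually for feature_range=(0, 1)
col_min = data.min(axis=0)
col_max = data.max(axis=0)
print("\nManual min max scaling =", (data - col_min) / (col_max - col_min))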
Normalization
Data normalization is used when you want to adjust the values in the feature vector so that they can be measured on a common scale. One of the most common forms of normalization used in machine learning adjusts the values of a feature vector so that their absolute values sum up to 1. Add the following lines to the previous file:
data_normalized = preprocessing.normalize(data, norm='l1')
print("\nL1 normalized data =", data_normalized)
If you run the Python file, you will get the following output:
L1 normalized data = [[ 0.25210084 -0.12605042  0.16806723 -0.45378151]
 [ 0.          0.625      -0.046875    0.328125  ]
 [ 0.0952381   0.31428571 -0.18095238 -0.40952381]]
This technique is often used to make sure that datapoints don't get boosted artificially just because some of their features naturally take large values.
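If you want to convince yourself of this, you can check that the absolute values in each L1-normalized row sum to 1 (a quick sketch appended to the same file); with norm='l2', each row would become a unit vector instead:
# After L1 normalization, each row's absolute values sum to 1
print("\nRow sums of |L1 normalized data| =", np.abs(data_normalized).sum(axis=1))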
Binarization
Binarization is used when you want to convert your numerical feature vector into a Boolean vector. Add the following lines to the Python file:
data_binarized = preprocessing.Binarizer(threshold=1.4).transform(data)
print("\nBinarized data =", data_binarized)
Run the code again, and you will see the following output:
Binarized data = [[ 1.  0.  1.  0.]
 [ 0.  1.  0.  1.]
 [ 0.  1.  0.  0.]]
All values above the threshold of 1.4 become 1, and the rest become 0. This is a very useful technique when we have some prior knowledge of the data, such as a threshold that separates meaningful values from noise.
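Since Binarizer marks values strictly greater than the threshold as 1 and everything else as 0, you can reproduce its output with a one-line NumPy comparison (a sketch appended to the same file, not part of the original recipe):
# Equivalent to Binarizer(threshold=1.4): values > 1.4 become 1, the rest 0
print("\nManually binarized data =", (data > 1.4).astype(float))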
One Hot Encoding
A lot of times, we deal with numerical values that are sparse and scattered all over the place. We don't really need to store these big values. This is where One Hot Encoding comes into the picture. We can think of One Hot Encoding as a tool that tightens the feature vector: it looks at each feature, identifies the total number of distinct values, and encodes the values using a one-of-k scheme. Each feature in the feature vector is encoded based on this scheme, which makes the representation more efficient in terms of space. For example, say we are dealing with 4-dimensional feature vectors. To encode the n-th feature, the encoder goes through the n-th feature in each feature vector and counts the number of distinct values. If the number of distinct values is k, it transforms the feature into a k-dimensional vector where exactly one value is 1 and all other values are 0. Add the following lines to the Python file:
encoder = preprocessing.OneHotEncoder()
encoder.fit([[0, 2, 1, 12], [1, 3, 5, 3], [2, 3, 2, 12], [1, 2, 4, 3]])
encoded_vector = encoder.transform([[2, 3, 5, 3]]).toarray()
print("\nEncoded vector =", encoded_vector)
This is the expected output:
Encoded vector = [[ 0.  0.  1.  0.  1.  0.  0.  0.  1.  1.  0.]]
In the above example, let's consider the third feature in each feature vector. The values are 1, 5, 2, and 4. There are four distinct values here, which means the one-hot encoded vector for this feature will be of length 4. The encoder lists the distinct values in sorted order (1, 2, 4, 5), so the value 5 is encoded as [0, 0, 0, 1]. Only one value can be 1 in this vector; here, the fourth element is 1, which indicates that the value is 5. You can verify this against the output above: the sixth through ninth elements correspond to the third feature and read 0, 0, 0, 1.
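If you want to see which slots of the encoded vector correspond to which values, you can inspect the categories the encoder learned for each feature. Note that this relies on the categories_ attribute, which is available in recent versions of scikit-learn:
# The distinct values the encoder found for each of the four features
print("\nCategories per feature =", encoder.categories_)
This should print the four sorted value lists, [0, 1, 2], [2, 3], [1, 2, 4, 5], and [3, 12], which explains the 3 + 2 + 4 + 2 = 11 columns in the encoded vector.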