Principal component analysis (PCA), introduced by Karl Pearson in 1901, is an algorithm that transforms data into a set of uncorrelated, orthogonal features called principal components. The principal components are the eigenvectors of the covariance matrix of the data.
PCA is sensitive to the scale of the features, so we often get better results by standardizing the data prior to applying PCA, although this is not strictly necessary. We can interpret PCA as projecting the data onto a lower-dimensional space: some of the principal components contribute relatively little information (low variance), so we can omit them. We have the following transformation:
T_L = X W_L

Here, X is the centered data matrix and W_L is the matrix whose columns are the first L principal components (eigenvectors).
The result is the matrix T_L, which has the same number of rows as the original data matrix but a lower number of columns.
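The transformation above can be sketched directly with NumPy on synthetic data; the shapes and variable names (X, W, T_L) are illustrative assumptions, not part of any specific dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))  # 100 samples, 5 features

# Center the data: PCA assumes zero-mean columns.
Xc = X - X.mean(axis=0)

# Eigendecomposition of the covariance matrix.
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))

# eigh returns eigenvalues in ascending order; sort the
# eigenvectors by descending eigenvalue (explained variance).
W = eigvecs[:, np.argsort(eigvals)[::-1]]

# Keep the first L components and project: T_L = Xc @ W_L.
L = 2
T_L = Xc @ W[:, :L]
print(T_L.shape)  # (100, 2)
```

Note that the resulting component scores are uncorrelated with each other, which is exactly the property PCA guarantees.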
Dimensionality reduction is, of course, useful for visualization and modeling, and it reduces the chance of overfitting. In fact, there is a technique called principal component regression (PCR) that uses this principle. In a nutshell, PCR...
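As a rough sketch of the PCR idea, we can regress a target on the first few component scores instead of on the raw predictors. The synthetic low-rank data below is an assumption made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 6

# Synthetic data with low-rank structure: two latent factors drive
# all six predictors, and the target depends on the first factor.
Z = rng.normal(size=(n, 2))
A = rng.normal(size=(2, p))
X = Z @ A + 0.05 * rng.normal(size=(n, p))
y = Z[:, 0] + 0.1 * rng.normal(size=n)

# Step 1: PCA on the centered predictors.
Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
W = eigvecs[:, np.argsort(eigvals)[::-1]]

# Step 2: keep only the first k component scores as regressors.
k = 2
T = Xc @ W[:, :k]

# Step 3: ordinary least squares on the scores instead of on X.
beta, *_ = np.linalg.lstsq(T, y - y.mean(), rcond=None)
y_hat = T @ beta + y.mean()
```

Because the component scores are uncorrelated, the least-squares step is numerically well behaved even when the original predictors are strongly collinear, which is a common motivation for PCR.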