The principal components analysis technique requires the model to be linear. Although the study of such algorithms is beyond the scope of this book, it is worth mentioning two approaches that extend PCA to nonlinear models:
Kernel PCA
Manifold learning
PCA extracts a set of orthogonal linear projections from an array of correlated observations X = {xᵢ}. The kernel PCA algorithm extracts a similar set of orthogonal projections from the inner product matrix XᵀX.
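As a reminder, linear PCA can be computed from the eigen-decomposition of the (centered) inner product structure of the observations. The following minimal sketch, using NumPy and randomly generated data (the dataset, its dimensions, and the number of components are illustrative assumptions, not taken from the book), shows the orthogonal linear projections:

```python
import numpy as np

# Illustrative data: 200 observations of 4 correlated-ish variables
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))

Xc = X - X.mean(axis=0)               # center the observations
C = Xc.T @ Xc / (len(Xc) - 1)         # covariance, proportional to X^T.X on centered data
eigvals, eigvecs = np.linalg.eigh(C)  # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]     # principal axes, largest variance first
W = eigvecs[:, order[:2]]             # top 2 orthogonal directions
Z = Xc @ W                            # orthogonal linear projections of X
```

The columns of W are orthonormal, so Z is an orthogonal projection of the centered data onto the leading principal directions.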
Non-linearity is introduced by applying a kernel function to the inner products. Kernel functions are described in the Kernel functions section of Chapter 12, Kernel Models and Support Vector Machines. Kernel PCA attempts to extract a low-dimensional feature set (or manifold) from the original observation space; linear PCA is the projection onto the tangent space of the manifold.
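The substitution of a kernel for the raw inner products can be sketched as follows. This is a minimal, illustrative implementation using an RBF kernel with NumPy; the data, the gamma parameter, and the function name kernel_pca are assumptions for the example, not the book's implementation:

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=0.5):
    """Sketch of kernel PCA: eigen-decompose the centered kernel matrix."""
    n = X.shape[0]
    # Pairwise squared Euclidean distances between observations
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    # RBF kernel replaces the inner products: K[i, j] = exp(-gamma.||xi - xj||^2)
    K = np.exp(-gamma * sq_dists)
    # Center the kernel matrix in the implicit feature space
    one_n = np.full((n, n), 1.0 / n)
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    # Eigen-decomposition; eigh returns eigenvalues in ascending order
    eigvals, eigvecs = np.linalg.eigh(Kc)
    idx = np.argsort(eigvals)[::-1][:n_components]
    # Projections of the training points onto the principal components
    return eigvecs[:, idx] * np.sqrt(eigvals[idx])

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
Z = kernel_pca(X, n_components=2, gamma=0.5)
```

With a linear kernel, K reduces to the ordinary inner product matrix and the procedure recovers standard PCA; the choice of kernel determines the shape of the manifold the algorithm can capture.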