Kernel PCA is an algorithm that keeps the main spirit of PCA, but goes a step further and uses the kernel trick so that it also works for non-linear data:
- Let's define the covariance matrix of the data in the feature space, which is the average over the dataset of the product of the mapping function and the transpose of the mapping function:

  $$C_F = \frac{1}{N} \sum_{i=1}^{N} \Phi(x_i)\,\Phi(x_i)^T$$

  It is similar to the covariance matrix we used for PCA, with $\Phi(x_i)$ in place of $x_i$.
- The next step is to solve the following eigenvalue equation so that we can compute the principal components:

  $$C_F v = \lambda v$$

  Here, $C_F$ is the covariance matrix of the data in feature space, $v$ is an eigenvector, and $\lambda$ (lambda) is the corresponding eigenvalue.
- Let's put the value of step 1 into step 2 – that is, substitute $C_F$ into the equation of step 2. Because each eigenvector then lies in the span of the mapped data points, it can be written as follows:

  $$v = \sum_{i=1}^{N} \alpha_i\, \Phi(x_i)$$

  Here, each $\alpha_i$ is a scalar coefficient.
- Now, let's bring the kernel function into the equation. Let's multiply both sides of the eigenvalue equation by $\Phi(x_k)^T$, for every $k = 1, \dots, N$:

  $$\Phi(x_k)^T C_F v = \lambda\, \Phi(x_k)^T v$$
- Let's put the value of $v$ from the equation in step 3 into the equation of step 4, as follows:

  $$\frac{1}{N} \sum_{i=1}^{N} \Phi(x_k)^T \Phi(x_i) \sum_{j=1}^{N} \alpha_j\, \Phi(x_i)^T \Phi(x_j) = \lambda \sum_{j=1}^{N} \alpha_j\, \Phi(x_k)^T \Phi(x_j)$$
- Now, we call $K$ the kernel (Gram) matrix, with entries $K_{ij} = \Phi(x_i)^T \Phi(x_j)$. Upon simplifying the preceding equation using $K$, it becomes $K^2 \alpha = N \lambda\, K \alpha$, which reduces to the eigenvalue problem:

  $$K \alpha = N \lambda\, \alpha$$

  So the principal components in feature space are found by solving an ordinary eigenvalue problem on the kernel matrix, without ever computing $\Phi(x)$ explicitly.
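The steps above can be sketched in NumPy. This is a minimal illustration, not a production implementation: it assumes an RBF kernel (the `gamma` hyperparameter is an assumption, since the derivation works for any kernel), builds the Gram matrix $K$, centers it in feature space, and solves the eigenvalue problem $K\alpha = N\lambda\alpha$ from the last step:

```python
import numpy as np

def rbf_kernel_pca(X, n_components=2, gamma=1.0):
    """Sketch of Kernel PCA with an RBF kernel (gamma is an assumed choice)."""
    # Build the kernel (Gram) matrix: K[i, j] = Phi(x_i) . Phi(x_j),
    # computed via the kernel trick instead of an explicit mapping Phi.
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

    # Center the data in feature space so the mapped points have zero mean
    # (the derivation assumes centered Phi(x_i)).
    N = len(X)
    one_n = np.full((N, N), 1.0 / N)
    K = K - one_n @ K - K @ one_n + one_n @ K @ one_n

    # Solve the eigenvalue problem K alpha = N lambda alpha.
    # eigh returns eigenvalues in ascending order, so reverse them.
    eigvals, alphas = np.linalg.eigh(K)
    eigvals, alphas = eigvals[::-1], alphas[:, ::-1]

    # The projection of x_k onto a component v = sum_i alpha_i Phi(x_i) is
    # sum_i alpha_i K(x_k, x_i) = (K alpha)_k, i.e. the scaled eigenvector entries.
    return alphas[:, :n_components] * np.sqrt(np.maximum(eigvals[:n_components], 0.0))

# Usage: embed a small dataset into its top two kernel principal components.
X = np.random.RandomState(0).randn(10, 3)
Z = rbf_kernel_pca(X, n_components=2, gamma=0.5)
print(Z.shape)  # (10, 2)
```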