In data-analysis problems with a large number of dimensions, principal component analysis based on the L2-norm (L2-PCA) is one of the most popular methods; however, L2-PCA is sensitive to outliers. PCA-L1, in contrast, maximizes the L1-norm, which is less affected by outliers, and several studies have demonstrated its superiority over L2-PCA [2][3]. However, PCA-L1 requires an enormous computational cost to obtain its bases: it relies on an iterative algorithm whose initial bases are eigenvectors of the autocorrelation matrix, and this autocorrelation matrix must be recalculated for each basis. In previous work [4], the authors proposed a fast PCA-L1 algorithm that yields theoretically identical bases and reduces the computation time to roughly a quarter. This paper attempts to further accelerate the computation of the PCA-L1 bases using a GPU.
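For orientation, the following is a minimal NumPy sketch of the greedy PCA-L1 iteration in the spirit of Kwak [2], showing the two cost factors noted above: the eigen-decomposition of the autocorrelation matrix used for initialization, which is repeated for every basis after deflation, and the sign-flipping fixed-point iteration itself. The function name `pca_l1` and its parameters are illustrative assumptions, and the GPU acceleration proposed in this paper is not reproduced here.

```python
import numpy as np

def pca_l1(X, n_components, max_iter=200, tol=1e-10):
    """Greedy PCA-L1 (L1-norm maximization) sketch after Kwak [2].

    X : (d, n) data matrix, one sample per column, assumed mean-centred.
    Returns W : (d, n_components) matrix of basis vectors.
    """
    X = X.astype(float).copy()
    d, n = X.shape
    W = np.zeros((d, n_components))

    for k in range(n_components):
        # Initial basis: leading eigenvector of the autocorrelation matrix,
        # recomputed for the deflated data at every component -- the
        # repeated cost highlighted in the abstract.
        S = X @ X.T
        _, vecs = np.linalg.eigh(S)
        w = vecs[:, -1]

        for _ in range(max_iter):
            # Flip signs so every projection contributes positively, then
            # move w toward the signed sample sum and renormalise.
            p = np.sign(w @ X)
            p[p == 0] = 1.0          # avoid a zero polarity (degenerate case)
            w_new = X @ p
            w_new /= np.linalg.norm(w_new)
            if np.linalg.norm(w_new - w) < tol:
                w = w_new
                break
            w = w_new

        W[:, k] = w
        # Greedy deflation: remove the found direction before the next basis.
        X = X - np.outer(w, w @ X)

    return W
```

Because every step is a dense matrix product or an eigen-decomposition, the iteration maps naturally onto GPU linear-algebra kernels, which is the direction this paper pursues.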
[1] David J. Kriegman, et al., "From Few to Many: Illumination Cone Models for Face Recognition under Variable Lighting and Pose," IEEE Trans. Pattern Anal. Mach. Intell., 2001.
[2] Nojun Kwak, et al., "Principal Component Analysis Based on L1-Norm Maximization," IEEE Trans. Pattern Anal. Mach. Intell., 2008.
[3] David J. Kriegman, et al., "Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection," ECCV, 1996.
[4] Yoshimitsu Kuroki, et al., "Fast Method of Principal Component Analysis Based on L1-Norm Maximization Algorithm," 2009.