Big data has many divergent types of sources, from physical (sensor/IoT) to social and cyber (web), rendering it messy, imprecise, and incomplete. Due to its quantitative (volume and velocity) and qualitative (variety) challenges, big data appears to its users much like "the elephant to the blind men". A major paradigm shift in data mining and learning tools is imperative, so that information from diversified sources can be integrated to unravel the knowledge hidden in massive and messy big data and, metaphorically speaking, let the blind men "see" the elephant. This talk addresses yet another vital "V"-paradigm: "Visualization". Visualization tools are meant to supplement (rather than replace) domain expertise (e.g., that of a cardiologist) and to provide a big picture that helps users formulate critical questions and subsequently postulate heuristic and insightful answers.

For big data, the curse of high feature dimensionality raises grave concerns about computational complexity and over-training. This talk explores various projection methods for dimension reduction, a prelude to the visualization of vectorial and non-vectorial data. A popular visualization tool for unsupervised learning is Principal Component Analysis (PCA). PCA aims at the best recoverability of the original data in the Euclidean Vector Space (EVS); however, it is not effective for supervised and collaborative learning environments. Discriminant Component Analysis (DCA), basically a supervised PCA, can be derived via the notion of a Canonical Vector Space (CVS). The signal-subspace components of DCA are associated with the discriminant distance/power (related to classification effectiveness), while the noise-subspace components are tightly coupled with recoverability and/or privacy protection. DCA enjoys two major merits. First, because the rank of the signal subspace is limited by the number of classes, DCA can effectively support classification using a relatively small dimensionality (i.e., high compression). Second, the eigenvalues of the noise space are ordered according to their corresponding reconstruction errors and can thus be used to control recoverability or anti-recoverability by applying a negative or a positive ridge, respectively.

Via DCA, individual data can be highly compressed before being uploaded to the cloud, thereby better enabling privacy protection. In many practical scenarios, additional privacy protection can be incorporated by allowing individual participants to selectively hide some personal features. The classification of such masked data calls for a kernel approach to Incomplete Data Analysis (KAIDA); more specifically, we extend PCA/DCA to their kernel variants. The success of kernel machines hinges upon the kernel function adopted to characterize the similarity of pairs of partially specified vectors. Simulations on the HAR dataset confirm that DCA far outperforms PCA in both their conventional and kernelized variants. For the latter, the visualization/classification results suggest favorable performance by the proposed partial correlation kernels over the imputed RBF kernel. In addition, the visualization results point to a potentially promising approach via multiple kernels, such as combining an imputed Gaussian RBF kernel with a non-imputed partial correlation kernel.
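To make the contrast with PCA concrete, below is a minimal NumPy-only sketch of a DCA-style discriminant projection, assuming the ridge-regularized scatter-matrix formulation suggested by the description above (total scatter plus a ridge, between-class scatter, signal subspace of rank at most n_classes - 1). The exact DCA derivation via CVS is given in the talk itself; `rho` here is simply an illustrative tuning parameter, not the talk's prescribed value.

```python
import numpy as np

def dca_projection(X, y, rho=1e-3):
    """DCA-style discriminant projection (illustrative sketch).

    X   : (n_samples, n_features) data matrix
    y   : (n_samples,) integer class labels
    rho : ridge regularizing the total scatter; the talk describes a
          positive vs. negative ridge as a knob for anti-recoverability
          vs. recoverability of the noise subspace.
    Returns a basis for the signal subspace, whose rank is limited by
    the number of classes (at most n_classes - 1 columns).
    """
    classes = np.unique(y)
    mu = X.mean(axis=0)
    Xc = X - mu
    S_bar = Xc.T @ Xc                       # total (center-adjusted) scatter
    S_B = np.zeros_like(S_bar)              # between-class scatter
    for c in classes:
        n_c = np.sum(y == c)
        d = X[y == c].mean(axis=0) - mu
        S_B += n_c * np.outer(d, d)
    # Leading eigenvectors of the ridge-regularized discriminant matrix
    # span the signal subspace used for classification/visualization.
    M = np.linalg.solve(S_bar + rho * np.eye(X.shape[1]), S_B)
    eigvals, eigvecs = np.linalg.eig(M)
    order = np.argsort(-eigvals.real)[: len(classes) - 1]
    return eigvecs[:, order].real

# Usage: W = dca_projection(X_train, y_train); Z_2d = X_train @ W[:, :2]
```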
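For the kernel variants, the standard kernel PCA construction (after Schölkopf et al.) on a precomputed kernel matrix is sketched below; this is the textbook recipe, not the talk's specific kernel DCA, but it is the common starting point that both kernel extensions share.

```python
import numpy as np

def kernel_pca(K, n_components=2):
    """Classic kernel PCA on a precomputed kernel matrix K."""
    n = K.shape[0]
    J = np.ones((n, n)) / n
    Kc = K - J @ K - K @ J + J @ K @ J      # double-center the kernel matrix
    eigvals, eigvecs = np.linalg.eigh(Kc)   # ascending eigenvalues
    idx = np.argsort(-eigvals)[:n_components]
    # Scale eigenvectors so the feature-space components have unit norm.
    alphas = eigvecs[:, idx] / np.sqrt(np.maximum(eigvals[idx], 1e-12))
    return Kc @ alphas                      # projections of the training points
```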
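The abstract does not spell out the partial correlation kernel, so the sketch below is one plausible reading of a non-imputed kernel over partially specified vectors: similarity is the Pearson correlation computed only over the coordinates observed in both vectors (hidden features encoded as NaN), with a hypothetical `min_overlap` safeguard added here for illustration.

```python
import numpy as np

def partial_correlation_kernel(X, min_overlap=2):
    """Similarity of partially specified vectors, without imputation (sketch).

    Missing (selectively hidden) features are encoded as NaN. Each pairwise
    entry is the Pearson correlation over the coordinates observed in BOTH
    vectors; pairs with too few shared coordinates default to 0.
    """
    n = X.shape[0]
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            mask = ~np.isnan(X[i]) & ~np.isnan(X[j])
            if mask.sum() >= min_overlap:
                a = X[i, mask] - X[i, mask].mean()
                b = X[j, mask] - X[j, mask].mean()
                denom = np.linalg.norm(a) * np.linalg.norm(b)
                K[i, j] = K[j, i] = (a @ b) / denom if denom > 0 else 0.0
    return K
```

One caveat with any such varying-subset kernel: it is not guaranteed to be positive semidefinite, so in practice one may need to clip negative eigenvalues or add a small diagonal ridge before feeding it to a kernel machine.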
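Finally, for the imputed baseline and the multiple-kernel combination the abstract alludes to, here is a sketch using simple per-feature mean imputation (the talk does not specify its imputation scheme) and a hypothetical mixing weight `beta`.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def imputed_rbf_kernel(X, gamma=None):
    """Gaussian RBF kernel after mean-imputing the hidden (NaN) features."""
    col_means = np.nanmean(X, axis=0)
    X_imp = np.where(np.isnan(X), col_means, X)  # fill NaNs with column means
    return rbf_kernel(X_imp, gamma=gamma)

# Hypothetical multiple-kernel combination, beta tuned e.g. by cross-validation:
# K_mix = beta * imputed_rbf_kernel(X) + (1 - beta) * partial_correlation_kernel(X)
```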