Dimensionality Reduction for Distributed Vision Systems Using Random Projection

Dimensionality reduction is an important issue in distributed vision systems: processing dimensionality-reduced data requires far fewer network resources (e.g., storage space, network bandwidth) than processing the original data. In this paper we explore the performance of the random projection method for distributed smart cameras. In our tests, random projection is compared to principal component analysis (PCA) in terms of object recognition efficiency. The results obtained on the COIL-20 image data set show that random projection performs well compared to PCA, which requires distributing a subspace across the network and therefore consumes more of its resources. This indicates that the random projection method can elegantly solve the problem of subspace distribution in embedded and distributed vision systems. Moreover, the method achieves good recognition efficiency even without explicit orthogonalization or normalization of the random projection subspace.
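The core idea can be illustrated with a minimal sketch, assuming NumPy; the dimensions and data below are illustrative placeholders, not values from the paper's experiments:

```python
# Minimal sketch of random projection for dimensionality reduction.
# Assumes NumPy; d, k, n and the synthetic data are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

d, k, n = 1024, 64, 100          # original dim, reduced dim, number of samples
X = rng.standard_normal((n, d))  # stand-in for flattened image vectors

# Random projection matrix with i.i.d. Gaussian entries; scaling by
# 1/sqrt(k) approximately preserves pairwise Euclidean distances
# (Johnson-Lindenstrauss lemma) without any orthogonalization or
# normalization of the projection subspace.
R = rng.standard_normal((d, k)) / np.sqrt(k)
X_reduced = X @ R                # shape (n, k)

# Check distance preservation on one pair of points: the ratio of the
# projected to the original distance concentrates around 1.
orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(X_reduced[0] - X_reduced[1])
ratio = proj / orig
```

Because each camera can regenerate `R` locally from a shared random seed, no subspace needs to be transmitted over the network, unlike PCA, where the learned basis must be distributed to every node.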
