SPARSE BAYESIAN LEARNING IN CLASSIFYING FACE FEATURE VECTORS

The Relevance Vector Machine (RVM), a Bayesian treatment of a generalized linear model with the same functional form as the Support Vector Machine (SVM), is a recently developed machine learning framework capable of building sparse models from large sets of candidate features. This paper describes the application of the RVM to the classification of face feature vectors obtained with the Eigenfaces method. The results of the RVM classification are compared with those obtained using both the Support Vector Machine and a classifier based on the Euclidean distance.
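
As an illustration of the pipeline the abstract describes (Eigenfaces feature extraction followed by RVM, SVM, and Euclidean-distance classification), the following is a minimal Python sketch, not the authors' implementation. The synthetic data, the third-party sklearn_rvm.EMRVC import, and all parameter values are assumptions chosen for illustration only.

```python
# Minimal sketch: Eigenfaces features via PCA, then three classifiers compared --
# an SVM, a 1-nearest-neighbour Euclidean-distance rule, and (if available) an RVM.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Placeholder data standing in for flattened face images (e.g. FERET crops);
# replace with a real face dataset in practice.
X = rng.normal(size=(200, 32 * 32))
y = rng.integers(0, 5, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Eigenfaces: project the images onto the leading principal components.
pca = PCA(n_components=50, whiten=True).fit(X_tr)
F_tr, F_te = pca.transform(X_tr), pca.transform(X_te)

classifiers = {
    "SVM (RBF kernel)": SVC(kernel="rbf", C=10.0, gamma="scale"),
    "Euclidean 1-NN": KNeighborsClassifier(n_neighbors=1, metric="euclidean"),
}
try:
    # Assumed third-party RVM implementation; not part of scikit-learn itself.
    from sklearn_rvm import EMRVC
    classifiers["RVM (RBF kernel)"] = EMRVC(kernel="rbf")
except ImportError:
    pass

for name, clf in classifiers.items():
    clf.fit(F_tr, y_tr)
    print(f"{name}: test accuracy = {clf.score(F_te, y_te):.3f}")
```

On real face data, the interest of the RVM lies in producing a much sparser model (few relevance vectors) while remaining competitive in accuracy with the SVM and the Euclidean-distance baseline.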
