Learning by focusing: A new framework for concept recognition and feature selection

In this paper, we develop a new method for feature selection and category learning. We first present two observations from our experiments: (1) it is easier to distinguish two concepts from each other than to learn an isolated concept, and (2) different concept pairs are best distinguished by different selections of optimal features. These observations may partly explain the success of human visual learning, in particular why an infant can capture the distinguishing visual features of new concepts as they are learned. Building on these observations, we develop a learning-by-focusing method that first constructs focalized discriminators for pairs of concepts and then builds nonlinear classifiers on the resulting discrimination scores. We construct datasets for four concept structures (vehicles, human afflictions, sports, and animals), and experiments on all four datasets verify the effectiveness of the new approach.
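As a minimal sketch of the learning-by-focusing pipeline described above: the abstract does not specify the particular classifiers used, so the choice of a linear SVM for each pairwise discriminator and an RBF-kernel SVM as the final nonlinear classifier is purely illustrative, as are all function names.

```python
# Hypothetical sketch of the learning-by-focusing pipeline: pairwise
# discriminators followed by a nonlinear classifier over discrimination scores.
from itertools import combinations

import numpy as np
from sklearn.svm import LinearSVC, SVC


def fit_pairwise_discriminators(X, y):
    """Train one focalized discriminator per pair of concept labels."""
    discriminators = {}
    for a, b in combinations(np.unique(y), 2):
        mask = (y == a) | (y == b)
        clf = LinearSVC()  # assumed choice of pairwise model
        clf.fit(X[mask], (y[mask] == a).astype(int))
        discriminators[(a, b)] = clf
    return discriminators


def discrimination_scores(discriminators, X):
    """Stack the signed decision values from every pairwise discriminator."""
    return np.column_stack(
        [clf.decision_function(X) for clf in discriminators.values()]
    )


def fit_learning_by_focusing(X, y):
    """Build the two-stage model: pairwise discriminators, then a nonlinear classifier."""
    discriminators = fit_pairwise_discriminators(X, y)
    final = SVC(kernel="rbf")  # assumed nonlinear second-stage classifier
    final.fit(discrimination_scores(discriminators, X), y)
    return discriminators, final


def predict(discriminators, final, X):
    """Classify new samples from their vector of pairwise discrimination scores."""
    return final.predict(discrimination_scores(discriminators, X))
```

The key design point this sketch illustrates is that each pairwise model sees only the two concepts it must separate, so it can focus on the features that best distinguish that pair, while the second stage combines all pairwise scores into a single multi-class decision.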
