May the torcher light our way: A negative-accelerated active learning framework for image classification

Uncertainty sampling is one of the most widely used strategies for pool-based active learning, yet the images it selects often fail to reflect the desired training distribution and incur additional labeling cost. To address this problem, drawing on both image classification and visual perception, we improve the traditional entropy-based sampling strategy by introducing the bag-of-visual-words (BoVW) classification method and the negatively accelerated learning principle from the Rescorla-Wagner model of perception. Unlike previous work that treated the sampling and classification processes separately, we combine the two into a single model under a unified negatively accelerated learning framework, named the negative-accelerated uncertainty sampling strategy with BoVW (NUSB), by proposing a new evolving sample selection measure that takes the category distribution into consideration. The classifier is trained to provide the category distribution for the sampling process, reducing the additional annotation cost. A transfer test is also used to prevent over-fitting and to further evaluate the performance of different sampling strategies. Experimental results on real-world datasets show that our active sampling framework outperforms both baseline active sampling strategies and a state-of-the-art active-learning-based image classification method.
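
For illustration, the following is a minimal Python sketch of the two standard ingredients the abstract builds on: entropy-based uncertainty scoring over a classifier's predicted class distribution, and a Rescorla-Wagner-style negatively accelerated update in which the increment shrinks as the learned strength approaches its asymptote. The function names and the learning-rate value are hypothetical; this is not the paper's NUSB measure, only a sketch of the components it combines.

```python
import numpy as np

def entropy_uncertainty(probs):
    """Shannon entropy of per-class probabilities; higher = more uncertain."""
    p = np.clip(probs, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

def rescorla_wagner_update(strength, asymptote, alpha=0.3):
    """Negatively accelerated update: the step size is proportional to the
    remaining gap between the current strength and its asymptote."""
    return strength + alpha * (asymptote - strength)

def select_batch(unlabeled_probs, batch_size):
    """Classic entropy-based uncertainty sampling: pick the unlabeled samples
    whose predicted class distributions have the highest entropy."""
    scores = entropy_uncertainty(unlabeled_probs)
    return np.argsort(scores)[::-1][:batch_size]
```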
