Regularization on Augmented Data to Diversify Sparse Representation for Robust Image Classification

Image classification is a fundamental component of modern computer vision systems, where sparse representation-based classification has drawn considerable attention due to its robustness. However, in optimizing sparse learning systems, regularization and data augmentation are both powerful techniques, yet they are currently applied in isolation. We believe that regularization and data augmentation can cooperate to advance robust image classification. In this article, we propose a novel framework, regularization on augmented data (READ), which diversifies the data using generic augmentation techniques to achieve robust sparse representation-based image classification. When the training data are augmented, READ applies a distinct regularizer, specifically l₁ or l₂, to the augmented training data, separate from the original data, so that regularization and data augmentation are exploited jointly and reinforce each other. We present a detailed theoretical analysis of how to optimize the sparse representation with both the l₁-norm and the l₂-norm under generic data augmentation, and demonstrate its performance in extensive experiments. The results obtained on several facial and object datasets show that READ outperforms many state-of-the-art methods when using deep features.
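To make the described mechanism concrete, the following is a minimal LaTeX sketch of a mixed-regularization coding objective consistent with the abstract; the symbols D_o (dictionary of original training samples), D_a (dictionary of augmented samples), the coefficient blocks α_o and α_a, the test sample y, and the weights λ₁ and λ₂ are assumed notation for illustration, not taken from the paper.

% Hypothetical mixed-regularization sparse coding objective (assumed notation):
% a test sample y is coded jointly over the original dictionary D_o and the
% augmented dictionary D_a, with a different regularizer on each coefficient block.
\begin{equation*}
\min_{\alpha_o,\,\alpha_a}\;
\bigl\| y - D_o \alpha_o - D_a \alpha_a \bigr\|_2^2
\;+\; \lambda_1 \,\| \alpha_o \|_1
\;+\; \lambda_2 \,\| \alpha_a \|_2^2
\end{equation*}
% Classification then follows the usual sparse representation-based scheme:
% y is assigned to the class whose atoms give the smallest class-wise
% reconstruction residual.

Which norm attaches to which block is purely illustrative here; the point of the sketch is that the coefficients of the original and augmented data receive distinct regularizers within a single coding problem.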