Contextual superpixel description for remote sensing image classification

The performance of pattern classifiers depends on the separability of the classes in the feature space, a property related to the quality of the descriptors, and on the choice of informative training samples for user labeling, a procedure that usually requires active learning. This work focuses on improving the quality of the descriptors when the samples are superpixels extracted from remote sensing images. We introduce a new scheme for superpixel description based on Bag of Visual Words that incorporates information from adjacent superpixels, and we validate it on two remote sensing images using several region descriptors as baselines.
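To make the idea of a contextual superpixel descriptor concrete, the following is a minimal sketch, not the authors' implementation: it assumes each pixel already carries a visual-word id (e.g. from k-means over local features) and a superpixel id, builds a Bag-of-Visual-Words histogram per superpixel, and concatenates it with the average histogram of its adjacent superpixels. All function names and the weighting parameter `alpha` are illustrative assumptions.

```python
import numpy as np


def bovw_histograms(superpixel_labels, word_map, n_words):
    """Per-superpixel histogram of visual-word occurrences.

    superpixel_labels : 2-D int array, superpixel id of each pixel.
    word_map          : 2-D int array, visual-word id of each pixel.
    """
    n_sp = superpixel_labels.max() + 1
    hists = np.zeros((n_sp, n_words))
    np.add.at(hists, (superpixel_labels.ravel(), word_map.ravel()), 1)
    # L1-normalize so superpixel size does not dominate the descriptor.
    hists /= np.maximum(hists.sum(axis=1, keepdims=True), 1)
    return hists


def adjacency(superpixel_labels):
    """Neighbouring superpixel ids for each superpixel (4-connectivity)."""
    n_sp = superpixel_labels.max() + 1
    neighbours = [set() for _ in range(n_sp)]
    # Compare horizontally and vertically adjacent pixel pairs.
    for a, b in ((superpixel_labels[:, :-1], superpixel_labels[:, 1:]),
                 (superpixel_labels[:-1, :], superpixel_labels[1:, :])):
        diff = a != b
        for i, j in zip(a[diff], b[diff]):
            neighbours[i].add(j)
            neighbours[j].add(i)
    return neighbours


def contextual_descriptors(superpixel_labels, word_map, n_words, alpha=0.5):
    """Concatenate each superpixel's BoVW histogram with the mean
    histogram of its adjacent superpixels, scaled by alpha."""
    hists = bovw_histograms(superpixel_labels, word_map, n_words)
    neighbours = adjacency(superpixel_labels)
    context = np.zeros_like(hists)
    for sp, nbrs in enumerate(neighbours):
        if nbrs:
            context[sp] = hists[list(nbrs)].mean(axis=0)
    return np.hstack([hists, alpha * context])
```

The concatenation doubles the descriptor length but keeps the superpixel's own appearance and its neighbourhood context separable for the classifier; other aggregation rules (e.g. summing or weighting by shared boundary length) fit the same skeleton.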