Investigating the Impact of the Training Set Size on Deep Learning-Powered Hyperspectral Unmixing

Hyperspectral unmixing allows us to estimate the abundances of endmembers in each pixel of an input hyperspectral image. Although end-to-end deep learning methods exist for this task, the scarcity of labeled ground-truth data makes adopting such techniques difficult in emerging practical use cases where the ground truth is costly to capture. In this paper, we investigate and quantify the impact of the training set size on the quality of unmixing delivered by deep learning models with conceptually different architectures.
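To make the task concrete, the classical linear mixing model treats each pixel spectrum as a weighted combination of endmember spectra, with the weights being the abundances to be estimated. Below is a minimal illustrative sketch of that model and its inversion; the endmember spectra and mixture ratios are synthetic, chosen purely for illustration, and are not taken from the paper or its benchmarks.

```python
import numpy as np

# Synthetic linear mixing model (illustrative values only):
# columns of E are endmember spectra observed in four spectral bands.
E = np.array([
    [0.9, 0.2],
    [0.7, 0.4],
    [0.3, 0.8],
    [0.1, 0.9],
])  # shape (bands, endmembers)

true_abundances = np.array([0.7, 0.3])  # a 70% / 30% mixture
pixel = E @ true_abundances             # the observed mixed pixel spectrum

# Unconstrained least-squares inversion recovers the abundances here;
# practical unmixing additionally enforces non-negativity and a
# sum-to-one constraint on the estimated abundance vector.
est, *_ = np.linalg.lstsq(E, pixel, rcond=None)
print(np.round(est, 3))  # → [0.7 0.3]
```

Deep learning-powered unmixing replaces this explicit inversion with a learned mapping from pixel spectra to abundances, which is precisely why the amount of labeled training data becomes the critical factor studied in the paper.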