Unsupervised Video Summarization via Relation-Aware Assignment Learning

We address the problem of unsupervised video summarization, which aims to automatically select the key clips of a video. Most state-of-the-art approaches suffer from two issues: (1) they model video clips without explicitly exploiting the relations among them, and (2) they learn soft importance scores over all clips to generate the summary representation. However, a meaningful video summary should be inferred by taking the relation-aware context of the original video into consideration and by directly selecting a subset of clips with a hard assignment. In this paper, we propose to exploit clip-clip relations to learn relation-aware hard assignments for selecting key clips in an unsupervised manner. First, we treat the clips as graph nodes and construct an assignment-learning graph. Then, we use the magnitude of the node features to generate hard assignments that serve as the summary selection. Finally, we optimize the whole framework with a proposed multi-task loss that combines a reconstruction constraint and a contrastive constraint. Extensive experimental results on three popular benchmarks demonstrate the favourable performance of our approach.
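As a rough illustration of the pipeline sketched above, and not the paper's exact architecture, the following PyTorch snippet builds a clip-clip relation graph from pairwise similarities, propagates features once over the graph, selects the top-k nodes by feature magnitude as a hard assignment, and combines an illustrative reconstruction term with an illustrative contrastive term. All dimensions, the single linear propagation layer, and the specific loss forms (MSE reconstruction, triplet-style contrast) are assumptions made for this sketch.

```python
import torch
import torch.nn.functional as F

# Hypothetical sizes: T clips, each with a d-dimensional feature; k clips kept in the summary.
T, d, k = 20, 128, 5
clip_feats = torch.randn(T, d)  # per-clip features, e.g., from a pretrained backbone

# 1) Clip-clip relation graph from pairwise cosine similarities (one simple choice).
sim = F.cosine_similarity(clip_feats.unsqueeze(1), clip_feats.unsqueeze(0), dim=-1)
adj = F.softmax(sim, dim=-1)  # row-normalized adjacency

# 2) One graph-propagation step to obtain relation-aware node features.
W = torch.nn.Linear(d, d)
node_feats = torch.relu(W(adj @ clip_feats))

# 3) Hard assignment: keep the k nodes with the largest feature magnitude.
magnitude = node_feats.norm(dim=-1)      # per-node L2 norm
keep = magnitude.topk(k).indices         # indices of the selected key clips
mask = torch.zeros(T)
mask[keep] = 1.0                         # hard 0/1 selection over clips

# 4) Illustrative losses: the selected clips should allow reconstructing the
#    original clip features, and the summary representation should stay close
#    to its own video while differing from other videos.
decoder = torch.nn.Linear(d, d)
recon = decoder(node_feats * mask.unsqueeze(-1))
loss_recon = F.mse_loss(recon, clip_feats)

summary_repr = node_feats[keep].mean(0)
video_repr = node_feats.mean(0)
other_video = torch.randn(d)             # stand-in negative from another video
loss_contrast = F.triplet_margin_loss(
    summary_repr.unsqueeze(0), video_repr.unsqueeze(0), other_video.unsqueeze(0))

loss = loss_recon + loss_contrast
loss.backward()
```

Note that the top-k selection itself is non-differentiable; in this sketch, gradients reach the selector only through the masked node features, so any relaxation or training trick the method actually uses for the hard assignment is not reflected here.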