Feature Aggregation Networks Based on Dual Attention Capsules for Visual Object Tracking

Tracking-by-detection algorithms have considerably improved tracking performance with the introduction of recent convolutional neural networks (CNNs). However, most trackers directly exploit standard scalar-output CNN features, which may not capture sufficient feature encoding information, instead of aggregated CNN features in vector-output form. In this paper, we propose an end-to-end feature aggregation capsule framework. First, building on an existing CNN backbone, we aggregate a fixed number of similar position-aware CNN features into a capsule to model feature similarity; the resulting vector-level feature capsules (rather than the scalar-level pointwise features used previously) are employed for discriminative learning. Second, we propose a group attention module that better models the entity representation across different capsule groups, thereby improving the overall discriminative capability. Third, to reduce the prediction interference caused by the increased dimensionality within capsules, we propose a penalty attention module; this strategy dynamically adjusts neuron values by estimating whether they are beneficial or harmful to tracking. Experimental results on five representative benchmarks (UAVDT, DTB70, UAV123, VOT2016, and VOT2018) demonstrate the excellent tracking performance of our dual attention based capsule tracker (DACapT). In particular, it exceeds the previous top tracker by 4.6%/1.9% in precision/success evaluations on UAVDT.
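To make the three components concrete, the following is a minimal PyTorch-style sketch of how such a pipeline might be organized. The module names (CapsuleAggregation, GroupAttention, PenaltyAttention) and all layer choices are hypothetical assumptions, not the authors' implementation; the sketch only illustrates the general idea of grouping scalar channels into capsule vectors, gating whole capsule groups, and applying a signed per-neuron penalty.

```python
# Hypothetical sketch of the DACapT pipeline described in the abstract.
# All module names and layer choices are assumptions for illustration.
import torch
import torch.nn as nn


class CapsuleAggregation(nn.Module):
    """Group scalar CNN channels into vector-valued capsules.

    The channel dimension C is split into G groups of D channels each,
    so every spatial position holds G capsules of dimension D instead
    of C independent scalars.
    """

    def __init__(self, in_channels: int, capsule_dim: int):
        super().__init__()
        assert in_channels % capsule_dim == 0
        self.capsule_dim = capsule_dim
        self.num_capsules = in_channels // capsule_dim

    def forward(self, x):  # x: (B, C, H, W)
        b, _, h, w = x.shape
        # (B, G, D, H, W): each length-D vector is one capsule.
        return x.view(b, self.num_capsules, self.capsule_dim, h, w)


class GroupAttention(nn.Module):
    """Reweight whole capsule groups with a learned gate (squeeze-excite style)."""

    def __init__(self, num_capsules: int, reduction: int = 4):
        super().__init__()
        hidden = max(num_capsules // reduction, 1)
        self.fc = nn.Sequential(
            nn.Linear(num_capsules, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, num_capsules), nn.Sigmoid())

    def forward(self, caps):  # caps: (B, G, D, H, W)
        # Summarize each capsule group by its mean activation, then gate it.
        s = caps.mean(dim=(2, 3, 4))                 # (B, G)
        w = self.fc(s).view(*s.shape, 1, 1, 1)       # (B, G, 1, 1, 1)
        return caps * w


class PenaltyAttention(nn.Module):
    """Signed per-neuron gate in [-1, 1] to suppress neurons judged harmful."""

    def __init__(self, capsule_dim: int):
        super().__init__()
        # 1x1x1 conv over the capsule dimension, treated as channels.
        self.gate = nn.Conv3d(capsule_dim, capsule_dim, kernel_size=1)

    def forward(self, caps):  # caps: (B, G, D, H, W)
        x = caps.permute(0, 2, 1, 3, 4)              # (B, D, G, H, W)
        p = torch.tanh(self.gate(x))                 # penalty weights in [-1, 1]
        return (x * p).permute(0, 2, 1, 3, 4)        # back to (B, G, D, H, W)


# Usage on dummy backbone features: 256 channels -> 32 capsules of dim 8.
feats = torch.randn(2, 256, 28, 28)
caps = CapsuleAggregation(256, capsule_dim=8)(feats)
caps = GroupAttention(num_capsules=32)(caps)
caps = PenaltyAttention(capsule_dim=8)(caps)
print(caps.shape)  # torch.Size([2, 32, 8, 28, 28])
```

The tanh gate is one plausible reading of "estimating whether neurons are beneficial or harmful": negative weights actively penalize a neuron's contribution while positive weights preserve it, unlike a sigmoid gate that can only attenuate.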