Siamese Guided Anchoring Network for Visual Tracking
Recently, the Siamese Region Proposal Network (SiamRPN) has been widely explored for tracking and has achieved remarkable performance. However, existing SiamRPN-based methods use predefined anchors that depend heavily on prior knowledge, which limits tracking accuracy. Moreover, when the target changes drastically, the anchor boxes produced by SiamRPN-based methods include negative samples, which degrades accuracy. To address these issues, this paper proposes a Siamese guided anchoring network for visual tracking, which obtains more representative anchors by estimating the position and shape of the target, thereby reducing the adverse effect of negative samples. In addition, a feature adaptation module is proposed to adapt to target scale changes, learning more discriminative features and achieving more accurate tracking. Extensive experiments on the challenging OTB100 and VOT2018 datasets demonstrate the competitive performance of the proposed algorithm in comparison with state-of-the-art trackers.
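To make the guided-anchoring idea concrete, the sketch below shows (in NumPy) how anchors can be generated from two predicted maps rather than from a fixed prior: a location-probability map is thresholded to choose anchor centers, and a shape branch's per-cell predictions (dw, dh) are mapped to anchor width and height via w = σ·s·exp(dw). This is a minimal illustration of the general guided-anchoring scheme, not the paper's implementation; the function name, stride, scale factor σ, and threshold are all assumed values.

```python
import numpy as np

def guided_anchors(loc_prob, shape_pred, stride=8, sigma=8.0, thresh=0.5):
    """Turn a location-probability map and per-cell shape predictions into
    anchor boxes (cx, cy, w, h), in the spirit of guided anchoring.

    loc_prob:   (H, W) predicted probability that a cell contains the target
    shape_pred: (H, W, 2) predicted (dw, dh); anchor size = sigma*stride*exp(d)
    (All parameter values here are illustrative assumptions.)
    """
    ys, xs = np.nonzero(loc_prob > thresh)    # keep only confident cells
    cx = (xs + 0.5) * stride                  # cell center in image coordinates
    cy = (ys + 0.5) * stride
    w = sigma * stride * np.exp(shape_pred[ys, xs, 0])
    h = sigma * stride * np.exp(shape_pred[ys, xs, 1])
    return np.stack([cx, cy, w, h], axis=1)

# Toy example: one confident cell at (row=2, col=3) with dw = dh = 0.
loc = np.zeros((4, 4))
loc[2, 3] = 0.9
shape = np.zeros((4, 4, 2))
anchors = guided_anchors(loc, shape)
print(anchors)  # one anchor at center (28.0, 20.0) with w = h = 64.0
```

Because the anchor shape is predicted per location instead of enumerated from a fixed set, cells with low target probability contribute no anchors at all, which is how this scheme reduces the negative samples mentioned above.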