Object detection in remote sensing images is widely used in military and civilian fields, yet it remains challenging because of complex backgrounds, large scale variation, and objects densely arranged in arbitrary orientations. Moreover, existing object detection methods rely on increasingly deep networks, which add considerable computational overhead and parameters and hinder deployment on edge devices. In this paper, we propose a lightweight keypoint-based oriented object detector for remote sensing images. First, we propose a semantic transfer block (STB) for merging shallow and deep features, which reduces noise and restores semantic information. Then, the proposed adaptive Gaussian kernel (AGK) adapts to objects of different scales and further improves detection performance. Finally, we propose a distillation loss tailored to object detection to obtain a lightweight student network. Experiments on the HRSC2016 and UCAS-AOD datasets show that the proposed method adapts to objects of different scales, produces accurate bounding boxes, and reduces the influence of complex backgrounds. Comparisons with mainstream methods show that our method achieves comparable performance while remaining lightweight.
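The abstract does not spell out how the adaptive Gaussian kernel is constructed, but the general idea in keypoint-based detectors is to splat a Gaussian peak at each object center whose spread follows the object's size. The following is a minimal sketch of such a scale-adaptive heatmap target; the function name, the `scale` factor, and the per-axis standard deviations are illustrative assumptions, not the paper's exact AGK formulation.

```python
import numpy as np

def draw_adaptive_gaussian(heatmap, center, box_w, box_h, scale=0.1):
    """Splat a Gaussian peak whose spread follows the object's size.

    Sketch only: the spread-to-size mapping used by the paper's AGK
    may differ from the simple linear rule assumed here.
    """
    # Standard deviation grows with the box dimensions, so large
    # objects get wide peaks and small objects get sharp ones.
    sigma_x = max(box_w * scale, 1.0)
    sigma_y = max(box_h * scale, 1.0)

    h, w = heatmap.shape
    cx, cy = int(center[0]), int(center[1])

    ys, xs = np.ogrid[:h, :w]
    gauss = np.exp(-(((xs - cx) ** 2) / (2 * sigma_x ** 2)
                     + ((ys - cy) ** 2) / (2 * sigma_y ** 2)))

    # Element-wise maximum keeps overlapping objects from erasing
    # each other's peaks in the shared heatmap.
    np.maximum(heatmap, gauss, out=heatmap)
    return heatmap
```

In practice such a target would be generated per class channel and regressed with a focal-style loss, as is common in keypoint-based detectors.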