FSoD-Net: Full-Scale Object Detection From Optical Remote Sensing Imagery

Object detection is an essential task in computer vision. Recently, several convolutional neural network (CNN)-based detectors have achieved great success in natural scenes. However, optical remote sensing images pose considerable challenges: a large field of view, a low proportion of foreground target pixels, and drastic differences in object scale. To address these problems, we propose a novel one-stage detector, the full-scale object detection network (FSoD-Net), which consists of a proposed multiscale enhancement network (MSE-Net) backbone cascaded with scale-invariant regression layers (SIRLs). First, MSE-Net enhances the multiscale description by integrating the Laplace kernel with a small number of parallel multiscale convolution layers. Second, the SIRLs contain three isolated regression branch layers (corresponding to small, medium, and large scales), so that the default discrete-scale bounding boxes (bboxes) cover full-scale object information during the regression procedure. We also design a novel scale-specific joint loss that combines the softmax function with a strong $L_{1}$-norm constraint in each regression branch layer; it further speeds up convergence and improves the classification scores of predicted bboxes. Finally, extensive experiments are carried out on the challenging large-scale data sets for object detection in aerial images (DOTA) and object detection in optical remote sensing images (DIOR), which contain multiple instances from different imaging platforms. The results demonstrate that FSoD-Net achieves better performance than other state-of-the-art one-stage detectors, reaching a mean average precision (mAP) of 75.33% on DOTA and 71.80% on DIOR. In particular, the average precision (AP) of tiny object detection improves by approximately 10%–20%.
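The Laplace-kernel enhancement in MSE-Net can be illustrated with a minimal sketch. The abstract only states that a Laplace kernel is integrated with parallel multiscale convolution layers, so the function names, the specific 3×3 kernel, and the additive residual form below are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

# Fixed 3x3 Laplace kernel: responds to edges, is zero on flat regions.
# This sharpens weak foreground responses, which matters when foreground
# pixels are a small fraction of a large remote sensing scene.
LAPLACE_3x3 = np.array([[ 0, -1,  0],
                        [-1,  4, -1],
                        [ 0, -1,  0]], dtype=np.float32)

def conv2d_same(x, k):
    """Naive 'same'-size 2-D cross-correlation, for illustration only."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(x, dtype=np.float32)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def laplace_enhance(x, alpha=1.0):
    """Hypothetical enhancement: add the Laplace response back to the
    input, boosting edges while leaving flat background unchanged."""
    return x + alpha * conv2d_same(x, LAPLACE_3x3)
```

Because the kernel sums to zero, flat background regions pass through unchanged while object boundaries are amplified; in the full network such an enhanced map would feed the parallel multiscale convolution branches.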
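The scale-specific joint loss is described only as a softmax function combined with a strong $L_{1}$-norm constraint in each regression branch. A per-branch sketch under that reading might look as follows; the equal weighting `lam` and the mean reductions are assumptions:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def scale_joint_loss(logits, labels, pred_box, gt_box, lam=1.0):
    """Hypothetical per-branch joint loss: softmax cross-entropy on the
    class scores plus an L1-norm penalty on the box offsets, echoing the
    abstract's description (exact weighting is an assumption)."""
    p = softmax(logits)
    ce = -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    l1 = np.abs(pred_box - gt_box).mean()
    return ce + lam * l1
```

Applying such a loss independently in the small-, medium-, and large-scale branches lets each branch be optimized against only the bboxes of its own scale range.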