Applying deep learning detection methods to ship detection remains challenging, owing to the small scale of the objects and interference from complex sea surfaces. In addition, existing ship detection methods rarely verify the robustness of their algorithms on multisensor images. Thus, we propose a new improvement to the "you only look once" version 3 (YOLOv3) framework for ship detection in marine surveillance, based on synthetic aperture radar (SAR) and optical imagery. First, improved anchor boxes are obtained by applying linear scaling to the clusters produced by the k-means++ algorithm. This addresses the difficulty that the advantages of YOLOv3's multiscale detection are not fully exploited when the anchor boxes assigned to different detection scales differ only slightly for a single target type. Second, we add uncertainty estimators for bounding-box positioning by introducing a Gaussian parameter for ship detection into the YOLOv3 framework. Finally, four anchor boxes are allocated to each detection scale in the Gaussian-YOLO layer instead of three as in the default YOLOv3 settings, because object sizes and orientations vary widely in remote sensing images of different resolutions. Applying the proposed strategy to "YOLOv3-spp" and "YOLOv3-tiny" improves their results by 2%–3%. Compared with other models, the improved-YOLOv3 achieves the highest average precision on both the optical (93.56%) and SAR (95.52%) datasets. The improved-YOLOv3 remains robust even on a mixed dataset of SAR and optical images acquired by different satellites at different scales.
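As a concrete illustration of the anchor-selection step, the minimal Python sketch below clusters ground-truth box sizes with k-means++ and applies a simple linear rescaling so that four anchors are assigned to each of the three detection scales. The function name, the scaling factor, and the synthetic data are illustrative assumptions and are not taken from the paper itself.

# Minimal sketch (not the authors' code): anchor-box selection with k-means++
# followed by a simple linear rescaling. Ground-truth boxes are assumed to be
# given as (width, height) pairs in pixels; 4 anchors per scale x 3 scales = 12
# anchors, following the abstract. Scaling factors are illustrative only.
import numpy as np
from sklearn.cluster import KMeans

def select_anchors(box_wh, anchors_per_scale=4, num_scales=3, scale_step=1.1):
    """Cluster box sizes with k-means++ and spread the clusters across scales."""
    k = anchors_per_scale * num_scales
    km = KMeans(n_clusters=k, init="k-means++", n_init=10, random_state=0)
    km.fit(box_wh)
    # Sort cluster centres by area so the smallest anchors go to the finest scale.
    centres = km.cluster_centers_
    centres = centres[np.argsort(centres[:, 0] * centres[:, 1])]
    # Linear scaling: stretch the anchors of coarser scales slightly so that the
    # anchors assigned to different detection scales differ more than the raw
    # clusters do (hypothetical choice of factors).
    factors = np.repeat(scale_step ** np.arange(num_scales), anchors_per_scale)
    return centres * factors[:, None]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    boxes = rng.uniform(8, 256, size=(1000, 2))  # synthetic (w, h) boxes in pixels
    print(select_anchors(boxes).round(1))        # 12 anchors, 4 per detection scale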