Abstract
Owing to its all-weather imaging capability and high resolution, synthetic aperture radar (SAR) has been widely applied to ship detection in military, civilian, and other domains. However, SAR-based ship detection is hampered by strong target scattering, large scale variation, and background interference, which lead to low detection accuracy. To address these limitations, this paper presents a novel SAR ship detection method, NAS-YOLOX, which combines the efficient feature fusion of the neural architecture search feature pyramid network (NAS-FPN) with a multi-scale attention mechanism for effective feature extraction. Specifically, NAS-FPN replaces the PAFPN of the baseline YOLOX, greatly enhancing the fusion of multi-scale feature information, and a dilated convolution feature enhancement module (DFEM) is designed and integrated into the backbone network to enlarge the receptive field and strengthen target feature extraction. Furthermore, a multi-scale channel-spatial attention (MCSA) mechanism is proposed to sharpen the focus on target regions, improve small-target detection, and adapt to targets of multiple scales. Extensive experiments on the benchmark HRSID and SSDD datasets demonstrate that NAS-YOLOX achieves performance comparable or superior to that of other state-of-the-art ship detection models, reaching best AP0.5 accuracies of 91.1% and 97.2%, respectively.
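The receptive-field benefit that motivates the DFEM can be illustrated numerically. A dilation rate d expands a k×k kernel's effective extent to k + (k−1)(d−1) without adding parameters, so stacking a few dilated convolutions covers a large region cheaply. The sketch below is an illustrative calculation only, not the paper's DFEM implementation; the kernel size and dilation rates shown are assumed for demonstration.

```python
def effective_kernel_size(k: int, d: int) -> int:
    """Effective spatial extent of a k x k kernel with dilation rate d."""
    return k + (k - 1) * (d - 1)

def stacked_receptive_field(kernel: int = 3, dilations=(1, 2, 4)) -> int:
    """Receptive field (stride 1) after stacking dilated convolutions.

    Each layer adds (effective kernel size - 1) to the receptive field.
    The dilation schedule (1, 2, 4) is a common hypothetical choice,
    not necessarily the one used in the DFEM.
    """
    rf = 1
    for d in dilations:
        rf += effective_kernel_size(kernel, d) - 1
    return rf

print(effective_kernel_size(3, 2))  # -> 5
print(stacked_receptive_field())    # -> 15: three 3x3 layers, no extra weights
```

A plain stack of three undilated 3×3 convolutions would cover only a 7×7 region; the dilated stack more than doubles that coverage with the same parameter count, which is why dilated convolutions help capture the context around multi-scale ship targets.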