Abstract: Road detection in Synthetic Aperture Radar (SAR) images enables precise identification of multi-scale road targets under complex backgrounds, playing a critical role in military and civilian applications such as battlefield surveillance, target localization, and disaster response. Compared to traditional methods relying on edge detection or region segmentation, Convolutional Neural Network (CNN)-based approaches exhibit superior feature extraction and segmentation accuracy. However, existing methods still struggle with multi-scale road detection because of the diverse resolutions of SAR datasets and the varying receptive fields required for roads of different scales. To address these challenges, this paper proposes a multi-scale road detection method based on a Dense Dilated Pyramid Network. The method integrates dense connections into a U-Net architecture, replacing traditional fixed-dilation-rate structures with progressive dilation rates to construct a dense dilated pyramid module in the encoder. This design progressively expands the receptive field to adapt to multi-resolution road features. Additionally, a multi-scale attention mechanism dynamically fuses shallow details and deep semantic information while suppressing background interference. Experiments on Gaofen-3 SAR datasets demonstrate that the proposed method achieves mean Intersection over Union values of 74.39%, 68.01%, and 66.32% at 1 m, 3 m, and 10 m resolutions, respectively, outperforming state-of-the-art methods by 2.04% to 13.7%. The method significantly reduces missed detections of small-scale roads and lowers false alarms caused by environmental interference, achieving the best detection performance across multi-scale scenarios in both single-image and cross-resolution settings.
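The abstract's key architectural claim is that progressive dilation rates expand the receptive field faster than fixed ones. As a minimal sketch (not the paper's implementation), the standard receptive-field formula for a stack of stride-1 dilated convolutions, RF = 1 + Σ (k − 1)·d_i, illustrates the effect; the kernel size of 3 and the dilation schedule [1, 2, 4] below are illustrative assumptions, not values from the paper:

```python
def receptive_field(kernel_size, dilations):
    """Effective receptive field of stacked stride-1 dilated convolutions.

    Each layer with kernel size k and dilation d adds (k - 1) * d pixels
    to the receptive field on top of the initial single pixel.
    """
    return 1 + sum((kernel_size - 1) * d for d in dilations)


# Progressive dilation (1, 2, 4) vs. a fixed dilation rate of 1,
# both with three 3x3 convolution layers:
progressive = receptive_field(3, [1, 2, 4])  # 1 + 2*(1+2+4) = 15
fixed = receptive_field(3, [1, 1, 1])        # 1 + 2*(1+1+1) = 7
print(progressive, fixed)
```

With the same parameter count, the progressive schedule more than doubles the receptive field (15 vs. 7 pixels), which is consistent with the abstract's motivation for adapting to roads at multiple resolutions.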