Abstract: Lane detection is a core task in autonomous driving perception systems, with significant application value in complex traffic environments. While existing methods perform well under normal conditions, lane detection still faces challenges such as blurriness, disconnection, and occlusion in adverse scenarios including low light, backlighting, heavy fog, rain, and snow. To improve lane detection performance under these harsh conditions, this paper proposes the α-SimADNet detection network, built upon the ADNet framework. The model performs anchor point extraction and parameter regression using ADNet, while enhancing the backbone network's feature discrimination and environmental adaptability by introducing negative-sample contrastive learning and a masked twin network trained with an alternating optimization strategy. These enhancements significantly improve the model's feature representation in challenging environments without increasing computational overhead at inference. Additionally, to address the insufficient gradient response of traditional IoU losses when regressing difficult samples, we introduce the power-adjusted α-GLIoU loss function, which improves the model's ability to fit broken and occluded lane lines. To thoroughly assess the proposed method's performance, we constructed HardLane-F100, a high-quality lane detection dataset focused on harsh environments, comprising 106 video segments and 10,600 image frames. This dataset mitigates the shortage of extreme-environment samples in current public datasets. Experimental results show that α-SimADNet achieves an F1@0.5 score of 83.2% on HardLane-F100, outperforming the mainstream methods ADNet and RVLD by 2.7% and 1.2%, respectively. Under the stricter F1@0.7 metric, it scores 60.9%, an improvement of 3.8% and 3.2% over ADNet and RVLD, respectively. The method delivers superior performance across a range of challenging scenarios, demonstrating its effectiveness in harsh environments.