Abstract: Object detection is a key technology for intelligent perception in autonomous parking systems in the era of autonomous driving. Perception with fisheye cameras faces several challenges, including complex environmental conditions, a diverse range of obstacles, and distortion of detection targets under the fisheye lens. Conventional algorithms struggle to maintain high detection accuracy across object types in complex parking scenarios. To address these issues, this paper proposes a rotation-based object detection method built on an improved YOLOv10n model. The approach introduces the SPPELAN module into the backbone network and uses DSConv to enhance the C2f module by improving the convolutional fusion of the iRMB, which strengthens feature extraction under fisheye distortion and improves the localization of small objects. In addition, an ATFL loss function is employed to sharpen the model’s focus on target features. Experimental results show that the improved algorithm achieves a mAP@0.5 of 89.89% and a mAP@0.5:0.95 of 69.36% on a fisheye-camera parking dataset, outperforming the baseline model by 0.62 and 0.60 percentage points, respectively. These results provide new insights for the development of parking perception technologies.
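Since the abstract does not give the ATFL formulation, the following is a minimal, illustrative PyTorch sketch of an adaptive-threshold focal loss in the spirit of ATFL, for a binary classification head. The function name, the batch-mean choice of threshold, and the exponent values `gamma_hard`/`gamma_easy` are assumptions for illustration, not the paper's exact definition.

```python
import torch
import torch.nn.functional as F

def adaptive_threshold_focal_loss(logits, targets, gamma_hard=1.0, gamma_easy=3.0):
    """Illustrative ATFL-style loss: focus training on hard examples.

    `targets` are binary {0, 1} tensors with the same shape as `logits`.
    The threshold separating easy from hard examples adapts to the
    batch-mean confidence; all constants are assumed values, not the
    paper's exact formulation.
    """
    p = torch.sigmoid(logits)
    # probability the model assigns to the ground-truth class
    p_t = p * targets + (1.0 - p) * (1.0 - targets)
    # adaptive threshold: one simple choice is the batch-mean confidence
    tau = p_t.mean().detach()
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    # hard examples (p_t < tau) get a smaller modulating exponent and are
    # down-weighted less; easy examples get a larger exponent and are
    # suppressed more, so gradient signal concentrates on hard targets
    gamma = torch.where(p_t < tau,
                        torch.full_like(p_t, gamma_hard),
                        torch.full_like(p_t, gamma_easy))
    return ((1.0 - p_t) ** gamma * ce).mean()

# toy usage: 8 predictions for a single-class head
logits = torch.randn(8)
targets = torch.randint(0, 2, (8,)).float()
loss = adaptive_threshold_focal_loss(logits, targets)
```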