Rotating target detection algorithm in parking images based on improved YOLOv10n

Authors: 梁列全, 李想, 何永华, 周璇

Affiliations:

1. School of Information Science, Guangdong University of Finance and Economics, Guangzhou 510320, China; 2. School of Statistics and Mathematics, Guangdong University of Finance and Economics, Guangzhou 510320, China; 3. School of Mechanical and Automotive Engineering, South China University of Technology, Guangzhou 510640, China

CLC number: TN911.73


Abstract:

Object detection is one of the key technologies for intelligent perception in automatic parking in the era of autonomous driving. Perception with fisheye cameras faces complex environmental conditions, a wide variety of obstacle types, and distortion of the detected objects in fisheye images, so conventional algorithms struggle to maintain detection accuracy for the various object classes in complex parking scenarios. To address this, this paper proposes a rotated object detection method based on an improved YOLOv10n. The SPPELAN module is introduced into the backbone network, and the C2f module is improved by applying DSConv to part of its convolutions and fusing in the iRMB module, which strengthens feature extraction under the fisheye lens and improves the localization of small objects. In addition, the ATFL loss function is adopted to sharpen the model's focus on the features of the detection targets. Experimental results show that the improved algorithm achieves an mAP@0.5 of 89.89% and an mAP@0.5:0.95 of 69.36% on a fisheye-camera parking dataset, improvements of 0.62% and 0.6% over the baseline model, respectively, providing a new approach for parking perception technology.
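To make the module-level changes concrete, the following is a minimal PyTorch sketch, not the authors' implementation: it assumes that DSConv here denotes a depthwise-separable convolution (the paper may intend a different DSConv variant), approximates the iRMB with a simplified inverted-residual block that uses squeeze-and-excitation-style channel attention in place of full self-attention, and uses illustrative class names (DSConv, iRMBLite, C2f_DS_iRMB) and hyperparameters.

# Minimal sketch (illustrative only): a C2f-style block whose inner convolutions
# use depthwise-separable convolution and are fused with an iRMB-style block.
import torch
import torch.nn as nn


class DSConv(nn.Module):
    # Depthwise-separable convolution: depthwise 3x3 followed by pointwise 1x1.
    def __init__(self, c_in, c_out, k=3, s=1):
        super().__init__()
        self.dw = nn.Conv2d(c_in, c_in, k, s, k // 2, groups=c_in, bias=False)
        self.pw = nn.Conv2d(c_in, c_out, 1, 1, 0, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.pw(self.dw(x))))


class iRMBLite(nn.Module):
    # Simplified inverted-residual mobile block: expand, depthwise conv,
    # SE-style channel attention (stand-in for the attention branch),
    # project back, residual connection.
    def __init__(self, c, expand=2.0):
        super().__init__()
        c_mid = int(c * expand)
        self.expand = nn.Conv2d(c, c_mid, 1, bias=False)
        self.dw = nn.Conv2d(c_mid, c_mid, 3, 1, 1, groups=c_mid, bias=False)
        self.se = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(c_mid, max(c_mid // 4, 1), 1),
            nn.SiLU(),
            nn.Conv2d(max(c_mid // 4, 1), c_mid, 1),
            nn.Sigmoid(),
        )
        self.project = nn.Conv2d(c_mid, c, 1, bias=False)
        self.bn = nn.BatchNorm2d(c)
        self.act = nn.SiLU()

    def forward(self, x):
        y = self.act(self.expand(x))
        y = self.act(self.dw(y))
        y = y * self.se(y)                    # reweight channels
        return x + self.bn(self.project(y))   # residual connection


class C2f_DS_iRMB(nn.Module):
    # C2f-style module: split channels, pass one half through n lightweight
    # DSConv + iRMB blocks, then concatenate all intermediate outputs.
    def __init__(self, c_in, c_out, n=2):
        super().__init__()
        self.c = c_out // 2
        self.cv1 = nn.Conv2d(c_in, 2 * self.c, 1, bias=False)
        self.blocks = nn.ModuleList(
            nn.Sequential(DSConv(self.c, self.c), iRMBLite(self.c)) for _ in range(n)
        )
        self.cv2 = nn.Conv2d((2 + n) * self.c, c_out, 1, bias=False)

    def forward(self, x):
        y = list(self.cv1(x).chunk(2, dim=1))
        for m in self.blocks:
            y.append(m(y[-1]))
        return self.cv2(torch.cat(y, dim=1))


if __name__ == "__main__":
    x = torch.randn(1, 64, 80, 80)            # e.g. a P3-level feature map
    print(C2f_DS_iRMB(64, 128)(x).shape)      # -> torch.Size([1, 128, 80, 80])

Under these assumptions, the block maps a 64-channel feature map to 128 channels at the same resolution while keeping the parameter count low, which is consistent with the lightweight YOLOv10n setting; the actual modules in the paper may differ in structure and placement.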

Cite this article:

梁列全, 李想, 何永华, 周璇. Rotating target detection algorithm in parking images based on improved YOLOv10n[J]. Electronic Measurement Technology (电子测量技术), 2025, 48(19): 205-216.


History
  • Online publication date: 2025-12-01