Multi-object Vehicle Detection Algorithm for Dense Scenes

Affiliation:

College of Electronic Engineering, Xi'an Shiyou University, Xi'an 710065, China

CLC number:

TN919.8; TP391

Fund project:

Supported by the General Program of the Shaanxi Provincial Department of Science and Technology (2020GY-152)


Abstract:

Object detection provides autonomous vehicles with the position, size, and class of nearby targets, but multi-object detection in dense scenes still suffers from missed and false detections. To address this, an AD-YOLOv5 vehicle detection model is proposed. First, the C3 module in the feature extraction network is optimized with the lightweight CBAM attention mechanism to obtain a C-C3 module, which strengthens the extraction of relevant feature information while suppressing attention to irrelevant features. Second, the classification and regression tasks in the detection head are decoupled to achieve stronger feature representation. Third, a generalized power transform is applied to the IoU, yielding a more robust Alpha-IoU loss function that improves detection accuracy and accelerates convergence. Finally, GridMask data augmentation is used to increase sample complexity, and experiments are conducted on the processed dataset. Experimental results show that the improved detection model reaches a mean average precision (mAP) of 72.72%, 2.25 percentage points higher than the original YOLOv5 model, while converging faster; visual comparison experiments show that the proposed model effectively avoids false and missed detections in dense scenes.
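The Alpha-IoU loss mentioned in the abstract amounts to a power transform of the IoU before it enters the loss, i.e. 1 − IoU^α instead of 1 − IoU. A minimal plain-Python sketch of that idea follows; the (x1, y1, x2, y2) box format and the choice α = 3 are illustrative assumptions, not settings taken from the paper.

```python
# Sketch of the Alpha-IoU idea: power-transform the IoU (IoU -> IoU**alpha)
# before forming the loss 1 - IoU**alpha.

def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union if union > 0 else 0.0

def alpha_iou_loss(box_a, box_b, alpha=3.0):
    """Loss = 1 - IoU**alpha; alpha > 1 steepens gradients for high-IoU boxes."""
    return 1.0 - iou(box_a, box_b) ** alpha

pred, gt = (0.0, 0.0, 2.0, 2.0), (1.0, 1.0, 3.0, 3.0)
# IoU = 1/7; plain IoU loss = 6/7 ~ 0.857, Alpha-IoU loss (alpha=3) ~ 0.997
```

With α = 1 the expression reduces to the ordinary IoU loss, so the transform is a strict generalization.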

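The GridMask augmentation named in the abstract increases sample complexity by deleting a regular grid of square regions from each training image. A minimal sketch of that masking pattern follows; the cell size `d` and masked `ratio` are illustrative defaults, and real GridMask additionally randomizes these per image and rotates the grid.

```python
# Sketch of GridMask-style augmentation: zero out one square region in the
# corner of every d x d grid cell of the image.

def grid_mask(image, d=4, ratio=0.5):
    """Zero a (d * ratio)-sized square in the corner of every d x d cell.

    image: list of rows of pixel values (grayscale here for simplicity).
    """
    k = max(1, int(d * ratio))           # side length of the masked square
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]      # copy so the input stays intact
    for y in range(h):
        for x in range(w):
            if y % d < k and x % d < k:  # inside the masked corner of its cell
                out[y][x] = 0
    return out

img = [[1] * 8 for _ in range(8)]
masked = grid_mask(img, d=4, ratio=0.5)
# each 4x4 cell now has its top-left 2x2 block zeroed
```

With `ratio=0.5` one quarter of each cell is erased, forcing the detector to rely on partial evidence — the same effect the paper uses to harden the model against occlusion in dense traffic.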

Cite this article:

HUO Aiqing, GUO Lanjie, FENG Ruoshui. Multi-object vehicle detection algorithm for dense scenes [J]. Electronic Measurement Technology, 2024, 47(9): 129-136.

History
  • Online publication date: 2024-09-04