Multitasking recognition and positioning of pitaya based on improved YOLOv5-s
DOI:
Author:
Affiliation:

Intelligent Manufacturing Department, Wuyi University, Jiangmen 529000, China

Author biography:

Corresponding author:

CLC number:

TP391.4

Fund project:

Supported by the Special Project in Key Fields of Ordinary Universities in Guangdong Province (New-Generation Information Technology) (2021ZDZX1045)


Multitasking recognition and positioning of pitaya based on improved YOLOv5-s
Author:
Affiliation:

Intelligent Manufacturing Department, Wuyi University, Jiangmen 529000, China

Fund Project:

    Abstract:

    In complex agricultural environments, the recognition and positioning performance of the perception end of a fruit-picking robot system is a key indicator for improving the fruit-picking success rate. Taking the irregularly shaped pitaya (dragon fruit) as the research object, this paper proposes SegYOLOv5, a real-time multi-task convolutional neural network for autonomous pitaya image detection, designed for the vision system of a picking robot. The network adapts the main architecture of the YOLOv5s convolutional neural network: three enhanced feature layers are extracted and fed into an improved cascaded RFBNet semantic segmentation layer, so that object detection and semantic segmentation are carried out as a single multi-task recognition pipeline, effectively improving the overall performance of the model. The improved SegYOLOv5 structure adapts to boundary-sensitive semantic segmentation in agricultural scenes; on the test set its mean average precision and mean intersection over union reach 93.10% and 83.64%, which are 1.23% and 2.74% higher than the YOLOv5s + original RFBNet model, and 2.38% and 1.45% higher than the YOLOv5s + BaseNet model. The average detection speed of SegYOLOv5 reaches 71.94 fps, 40.79 fps faster than EfficientDet-D0, with a mean average precision 5.8% higher. By combining the end-to-end SegYOLOv5 detection output with an image geometric moment operator, the centroid of the pitaya can be located accurately in real time as the ideal picking point. The improved algorithm has high robustness and generality, laying an effective practical foundation for vision-based fruit-picking robots.
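    The multi-task structure described in the abstract can be illustrated with a short sketch: three multi-scale feature maps from a YOLOv5-style neck are projected to a common width, fused, and decoded into a semantic segmentation mask alongside the usual detection heads. The PyTorch code below is an illustrative assumption, not the paper's actual cascaded RFBNet design; the SegHead class name, channel widths, and fusion scheme are all invented for the example.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SegHead(nn.Module):
    """Minimal segmentation branch fed by three multi-scale feature maps
    (hypothetical stand-in for the improved cascaded RFBNet layer)."""
    def __init__(self, in_channels=(128, 256, 512), mid=64, num_classes=2):
        super().__init__()
        # 1x1 convolutions project each feature level to a common width
        self.reduce = nn.ModuleList([nn.Conv2d(c, mid, 1) for c in in_channels])
        self.fuse = nn.Sequential(
            nn.Conv2d(mid * 3, mid, 3, padding=1),
            nn.BatchNorm2d(mid),
            nn.SiLU(),
            nn.Conv2d(mid, num_classes, 1),
        )

    def forward(self, feats):
        # feats: [P3 (stride 8), P4 (stride 16), P5 (stride 32)]
        target = feats[0].shape[-2:]
        ups = [F.interpolate(r(f), size=target, mode="bilinear", align_corners=False)
               for r, f in zip(self.reduce, feats)]
        logits = self.fuse(torch.cat(ups, dim=1))
        # restore full input resolution from the stride-8 feature map
        return F.interpolate(logits, scale_factor=8, mode="bilinear", align_corners=False)

if __name__ == "__main__":
    # dummy multi-scale features for a 640x640 input image
    p3 = torch.randn(1, 128, 80, 80)
    p4 = torch.randn(1, 256, 40, 40)
    p5 = torch.randn(1, 512, 20, 20)
    print(SegHead()((p3, p4, p5)).shape)  # torch.Size([1, 2, 640, 640])

    In a multi-task setup of this kind, the detection heads and the segmentation branch share the backbone and neck, so the extra segmentation output adds relatively little inference cost.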

    Abstract:

    The recognition and positioning capabilities of the visual perception terminal of a fruit-picking robot system are crucial indicators for increasing the fruit-picking success rate in complex agricultural environments. Taking the irregularly shaped pitaya fruit as the research object, this paper proposes SegYOLOv5, a real-time multi-task convolutional neural network suited for autonomous pitaya image detection in the visual system of a picking robot. The network is adapted from the primary architecture of the YOLOv5s convolutional neural network: by extracting three enhanced feature layers as the input of an improved cascaded RFBNet semantic segmentation layer, it performs image detection and semantic segmentation as a single multi-task recognition pipeline, substantially improving the overall performance of the model. With a mean average precision and mean intersection over union of 93.10% and 83.64%, respectively, on the testing dataset, the enhanced SegYOLOv5 architecture adapts well to boundary-sensitive semantic segmentation in agricultural scenes; compared with the YOLOv5s + original RFBNet and YOLOv5s + BaseNet models, it is 1.23% and 2.74% higher than the former, and 2.38% and 1.45% higher than the latter. The average detection speed of SegYOLOv5 reaches 71.94 fps, which is 40.79 fps faster than EfficientDet-D0, with a mean average precision 5.8% higher. By combining the end-to-end SegYOLOv5 detection output with an image geometric moment operator, the center of mass of the pitaya fruit can be precisely positioned in real time as the best picking position. The improved algorithm has high robustness and versatility, laying an effective practical foundation for fruit-picking robots based on visual perception.
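    As a concrete illustration of the picking-point step, the sketch below locates the fruit centroid from image geometric moments (cx = M10/M00, cy = M01/M00) using OpenCV, assuming the segmentation output has been turned into a binary mask; the picking_point function and the synthetic circular mask are illustrative, not taken from the paper.

import cv2
import numpy as np

def picking_point(mask):
    """Return the (x, y) centroid of a binary fruit mask, or None if the mask is empty."""
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:                    # no foreground pixels
        return None
    cx = int(m["m10"] / m["m00"])        # first-order moments divided by area
    cy = int(m["m01"] / m["m00"])
    return cx, cy

if __name__ == "__main__":
    # synthetic circular "fruit" mask as a quick self-check
    mask = np.zeros((480, 640), dtype=np.uint8)
    cv2.circle(mask, (300, 200), 60, 255, -1)
    print(picking_point(mask))           # approximately (300, 200)

    When several fruits are detected, the same computation can be applied within each detection box so that every fruit gets its own centroid.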

Cite this article

KONG Fanguo, LI Zhihao, QIU Zhanming, WANG Xin. Multitasking recognition and positioning of pitaya based on improved YOLOv5-s[J]. Electronic Measurement Technology, 2023, 46(18): 155-162.

History
  • Received:
  • Revised:
  • Accepted:
  • Published online: 2024-01-10
  • Published: