Visual SLAM approach based on depth constraints and optical flow tracking

Affiliation: School of Environment and Spatial Informatics, China University of Mining and Technology, Xuzhou 221116, China

CLC number: TP391.9; TN98

Funding: Supported by the National Natural Science Foundation of China (42274048), the Key Research and Development Program of Jiangsu Province (BE2022716), and the Postgraduate Innovation Program of China University of Mining and Technology (2025WLJCRCZL219)

    Abstract:

    Simultaneous localization and mapping (SLAM) is key to autonomous robot navigation. However, traditional SLAM systems are typically designed for static environments; when dynamic objects are present, dynamic feature points cause incorrect data associations that reduce accuracy and reliability. Existing solutions still face challenges such as undetected potentially dynamic objects and an insufficient number of useful feature points when dynamic objects dominate the scene. To overcome these limitations, this study proposes a visual SLAM system based on ORB-SLAM2. First, YOLOv8 object detection provides semantic information, which is combined with depth information under depth constraints to generate dynamic masks. Next, a quadtree-based uniform distribution of feature points is performed according to dynamic probability, removing dynamic feature points while preserving more useful features. Finally, optical flow tracking is used to detect and reject feature points on potentially dynamic objects. The dynamic masks are combined with keyframes to realize motion segmentation, constructing a clean, dense point cloud map. Experimental results on the TUM and Bonn datasets show that, compared with ORB-SLAM2, the average localization accuracy improves by more than 90% in highly dynamic scenes while remaining reliable in relatively static environments. In addition, the system runs in real time and outperforms other state-of-the-art methods of the same category.
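The quadtree-based, dynamic-probability-weighted feature selection summarized in the abstract can be sketched roughly as follows. This is a pure-Python illustration, not the paper's implementation: the `p_dynamic` attribute, the 0.5 threshold, and the keep-one-feature-per-cell rule are all assumptions for the sketch.

```python
# Illustrative sketch: recursively split the image into quadtree cells and keep,
# per leaf cell, the strongest feature whose dynamic probability is acceptable.
# Features whose p_dynamic exceeds p_dyn_max are treated as dynamic and dropped,
# so the surviving features are both spatially uniform and (likely) static.

def select_features(features, x0, y0, x1, y1, depth=0, max_depth=2, p_dyn_max=0.5):
    """features: list of (x, y, response, p_dynamic) tuples.

    Returns the features kept inside the cell [x0, x1) x [y0, y1).
    """
    # Keep only features inside this cell with acceptable dynamic probability.
    inside = [f for f in features
              if x0 <= f[0] < x1 and y0 <= f[1] < y1 and f[3] < p_dyn_max]
    if not inside:
        return []
    if depth == max_depth or len(inside) == 1:
        # Leaf cell: keep the single strongest surviving feature.
        return [max(inside, key=lambda f: f[2])]
    # Split the cell into four quadrants and recurse.
    mx, my = (x0 + x1) / 2, (y0 + y1) / 2
    kept = []
    for qx0, qy0, qx1, qy1 in ((x0, y0, mx, my), (mx, y0, x1, my),
                               (x0, my, mx, y1), (mx, my, x1, y1)):
        kept += select_features(inside, qx0, qy0, qx1, qy1,
                                depth + 1, max_depth, p_dyn_max)
    return kept
```

For example, with two clustered strong corners, one highly dynamic corner, and one isolated static corner in a 100 x 100 image, a one-level split keeps one feature per occupied quadrant and discards the dynamic one.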

Cite this article:

YIN Xianbo, WANG Zhongyuan. Visual SLAM approach based on depth constraints and optical flow tracking [J]. Electronic Measurement Technology, 2025, 48(16): 122-131.

History
  • Online publication date: 2025-11-04