End-to-end camera-LiDAR extrinsic calibration method based on stereo camera depth estimation

School of Instrument Science and Engineering, Southeast University, Nanjing 210096, China

CLC number: TH74

Abstract:

Accurate and reliable extrinsic calibration of sensors is essential for achieving high-precision localization and navigation in camera-LiDAR fusion systems. However, existing end-to-end camera-LiDAR calibration methods suffer from limitations such as large model parameter counts and mismatched cross-modal feature correlation computation. To address these issues, this article proposes a joint calibration method based on stereo camera-estimated depth maps and initial LiDAR-projected depth maps. Specifically, the SGBM algorithm performs stereo matching on the binocular image pair to generate a high-accuracy depth estimation map. This map, along with the initial LiDAR depth projection, is fed into a lightweight deep neural network designed for multi-modal feature extraction, effectively mitigating modality inconsistency. A correlation matching layer then computes feature-level correspondences, and two separate self-attention mechanisms are introduced to independently model the rotational and translational extrinsic parameters. Finally, an iterative refinement training strategy is adopted to achieve high-accuracy extrinsic estimation. Experimental results on the KITTI Odometry dataset show that the proposed method achieves an average translation error of 0.67 cm and an average rotation error of 0.09°, reductions of 59.64% and 72.73%, respectively, compared with the state-of-the-art method LCCNet, while requiring fewer model parameters. In addition, real-world vehicle tests further demonstrate the effectiveness of the proposed method: when its calibration result is used as the initial extrinsic parameters in the LVI-SAM system, the absolute trajectory root mean square error is reduced by 5.18% compared with LCCNet, validating the accuracy and practical applicability of the method.
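The stereo depth estimation step described above rests on the standard pinhole stereo relation depth = f·B/d, where f is the focal length in pixels, B the baseline, and d the disparity (in practice produced by an SGBM matcher such as OpenCV's `cv2.StereoSGBM_create`). The following sketch illustrates only this disparity-to-depth conversion; the camera parameters used are illustrative values, not the ones from the paper:

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m, eps=1e-6):
    """Convert a disparity map (pixels) to a metric depth map.

    Uses depth = f * B / d. Invalid (non-positive) disparities,
    e.g. unmatched pixels flagged by an SGBM matcher, map to depth 0.
    """
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.zeros_like(disparity)
    valid = disparity > eps          # mask out unmatched / zero-disparity pixels
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Toy 2x2 disparity map with hypothetical KITTI-like intrinsics:
# focal length 721.5 px, stereo baseline 0.54 m (illustrative only).
disp = np.array([[72.15, 7.215],
                 [0.0, 36.075]])
depth = disparity_to_depth(disp, focal_px=721.5, baseline_m=0.54)
# depth[0, 0] = 721.5 * 0.54 / 72.15 = 5.4 m; depth[1, 0] stays 0 (invalid)
```

Projecting LiDAR points into the image plane with the current extrinsic guess yields the second depth map; since both inputs are now depth images, the network compares like with like instead of correlating raw intensity against range.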

Cite this article:

LIU Qiuhua, XU Xiaosu. End-to-end camera-LiDAR extrinsic calibration method based on stereo camera depth estimation [J]. Chinese Journal of Scientific Instrument, 2025, 46(5): 214-225.

Published online: 2025-08-12