Abstract: The traditional ORB-SLAM3 system performs well in static environments; however, dynamic features introduce unnecessary noise, leading to errors in feature matching and inaccurate camera pose estimation. Existing dynamic SLAM algorithms struggle to comprehensively identify potential dynamic features, resulting in missed detections or false positives and consequently degrading localization accuracy. To address these issues, the semantic segmentation network DeepLabv3+ and the Lucas-Kanade (LK) optical flow method are incorporated into the tracking thread of ORB-SLAM3. Specifically, the backbone of DeepLabv3+ is replaced with MobileNetV3 to enhance the precision of semantic segmentation. Semantic segmentation then yields a mask of potential dynamic objects, which is used to preliminarily filter out dynamic feature points. LK optical flow is computed for the remaining feature points, with the average optical flow error serving as a threshold so that an insufficient number of static feature points does not cause pose estimation to fail. Compared with the original ORB-SLAM3, the improved algorithm achieves an average localization accuracy improvement of 47.92% on the high-dynamic sequences of the TUM dataset. Moreover, among existing advanced dynamic SLAM algorithms, the proposed method achieves the highest localization accuracy on the Walking_static sequence of the TUM dataset.
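The two-stage filtering described in the abstract (semantic mask first, then an LK optical-flow check with the average error as threshold) can be illustrated with a minimal sketch. The snippet below is not the paper's implementation: it assumes OpenCV's `calcOpticalFlowPyrLK`, a binary mask that is non-zero on potential dynamic objects, and ORB keypoints; the function name `filter_dynamic_points` and all parameter values are hypothetical.

```python
import numpy as np
import cv2


def filter_dynamic_points(prev_gray, curr_gray, keypoints, dynamic_mask):
    """Hypothetical sketch: drop keypoints inside the semantic mask of
    potential dynamic objects, then keep only the remaining points whose
    LK optical-flow error does not exceed the average error."""
    # Step 1: semantic filtering -- keep points lying outside the dynamic mask
    pts = np.float32([kp.pt for kp in keypoints]).reshape(-1, 1, 2)
    xy = pts.reshape(-1, 2).astype(int)
    outside = dynamic_mask[xy[:, 1], xy[:, 0]] == 0
    candidates = pts[outside]
    if len(candidates) == 0:
        return candidates, candidates

    # Step 2: LK optical flow on the remaining (tentatively static) points
    next_pts, status, err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, candidates, None,
        winSize=(21, 21), maxLevel=3)

    # Step 3: the mean flow error acts as an adaptive threshold, so a
    # sufficient share of tracked points survives and pose estimation
    # retains enough static correspondences
    valid = status.ravel() == 1
    if not valid.any():
        return candidates[:0], candidates[:0]
    mean_err = err.ravel()[valid].mean()
    keep = valid & (err.ravel() <= mean_err)
    return candidates[keep], next_pts[keep]
```

In the system described here, this filtering would occur inside the ORB-SLAM3 tracking thread before the surviving correspondences are passed to pose estimation.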