Abstract: To address the problems of single-dimensional environmental perception, inefficient dense map construction, and the lack of environmental detail in sparse maps when legged robots visually perceive changing outdoor environments, this paper proposes a machine-vision-based multi-dimensional environmental mapping method for legged robots that is large-scale globally and high-resolution locally. The method uses a visual SLAM algorithm that combines RGB images with depth information to estimate camera poses and generate environmental point clouds. An improved voxel filter then reduces point-cloud density, and ray projection creates virtual points, yielding a geometric-dimension environmental map that is large-scale globally and high-resolution locally. On this basis, to meet outdoor legged robots' need to perceive the physical dimension of the environment, the method performs high-precision semantic segmentation of outdoor terrain with an improved SegNet network. It then exploits terrain optical characteristics and surface structural features to establish, through a decision model, a mapping from terrain semantics to physical-layer attribute parameters, thereby constructing a terrain physical-dimension map. Finally, fusing the terrain geometric-dimension and physical-dimension maps completes the multi-dimensional environmental map for outdoor legged robots. The rationality and effectiveness of the proposed mapping method are validated through mapping experiments on a physical platform.
The experimental results demonstrate that the proposed multi-dimensional environmental mapping method significantly outperforms traditional mapping methods in mapping performance, extraction of key environmental information, and perception dimensions. It better supports legged robots' comprehensive, non-contact understanding of environmental information during outdoor locomotion, thereby enhancing their adaptability to outdoor environments.
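The abstract does not detail the improved voxel filter or the ray-projection virtual points, but the baseline operation it builds on, voxel-grid downsampling of a point cloud by averaging the points that fall in each voxel, can be sketched as follows. The function name, the centroid-averaging rule, and the voxel size are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def voxel_filter(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Downsample an (N, 3) point cloud with a plain voxel-grid filter.

    Each point is assigned to a cubic voxel of edge `voxel_size`, and all
    points sharing a voxel are replaced by their centroid. This is only a
    generic baseline; the paper's "improved" variant is not specified here.
    """
    # Map each point to an integer voxel index along x, y, z.
    idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel: `inverse[i]` is the voxel id of point i.
    _, inverse = np.unique(idx, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)
    n_voxels = int(inverse.max()) + 1
    # Accumulate coordinate sums and counts per voxel, then average.
    sums = np.zeros((n_voxels, 3))
    np.add.at(sums, inverse, points)
    counts = np.bincount(inverse, minlength=n_voxels).reshape(-1, 1)
    return sums / counts
```

With a 0.2 m voxel over a 1 m cube, for example, an arbitrarily dense cloud collapses to at most 5 × 5 × 5 = 125 representative points, which is the density-reduction effect the geometric-dimension mapping step relies on.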