Abstract: Learning-based motion planning approaches use data-driven policies trained on large-scale driving experience and have demonstrated strong performance. However, these methods often treat motion planning as a black box, resulting in limited interpretability, and they face challenges such as dataset bias, overfitting, and convergence to local optima. In this paper, we exploit the powerful inference and interpretation capabilities of emerging large language models (LLMs) to propose an LLM-based motion planning framework for autonomous driving, called LLMs-Driver, which addresses the poor interpretability of learning-based approaches. LLMs-Driver consists of three parts: a reasoning module, a memory module, and a reflection module. In the reasoning module, we propose an important-experience playback algorithm, which integrates two influencing factors, experience priority and scene similarity, to improve the learning efficiency and performance of LLMs-Driver. In the memory module, we propose an improved first-in-first-out experience storage algorithm that preserves the validity and novelty of stored experiences, ensuring that LLMs-Driver continuously learns from the most recent and effective strategies. Meanwhile, to fully enhance the transparency and credibility of the autonomous driving motion planning model, we adopt a ‘three-step chain of thought’ method, which divides the inference and reflection process into three steps, each accompanied by explanatory textual reasoning. Finally, we validate LLMs-Driver through closed-loop autonomous driving experiments on the Highway-env simulation platform. The experimental results show that LLMs-Driver offers significant interpretability and strong motion planning capability, increasing the median number of successful steps per task to up to 2.19 times that of the baseline algorithm.
Additionally, it supports the customization of different driving styles based on the driver's intention.