Path planning for mobile robots based on self-adaptive exploration DDQN
Affiliation:

1. Key Laboratory of Advanced Manufacturing and Automation Technology, Guilin University of Technology, Education Department of Guangxi Zhuang Autonomous Region, Guilin 541006, China
2. Guangxi Key Laboratory of Special Engineering Equipment and Control, Guilin University of Aerospace Technology, Guilin 541004, China
3. Key Laboratory of AI and Information Processing, Education Department of Guangxi Zhuang Autonomous Region, Hechi University, Hechi 546300, China
4. Guilin Mingfu Robot Technology Company Limited, Guilin 541004, China
5. School of Information Engineering, Nanning College of Technology, Guilin 541006, China

CLC Number: TP242; TN711


    Abstract:

    To address the imbalanced allocation of exploration and exploitation, as well as the insufficient data utilization, of the traditional double deep Q-network (DDQN) algorithm in path planning, an improved DDQN path planning algorithm is proposed. First, the concept of exploration success rate is introduced into a self-adaptive exploration strategy that divides the training process into an exploration phase and an exploitation phase, allocating the two effectively. Second, a double-experience-pool mixed sampling mechanism partitions and samples experience data by reward magnitude to maximize the use of beneficial data. Finally, a reward function based on the artificial potential field is designed so that the robot receives denser single-step rewards, effectively mitigating the sparse-reward problem. Experimental results show that, compared with the traditional DDQN algorithm and a DDQN variant based on experience classification and multi-step returns, the proposed algorithm achieves higher reward values, higher success rates, and shorter planning times and step counts, demonstrating superior overall performance.
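The double-experience-pool mixed sampling and the potential-field reward shaping described above can be sketched as follows. The abstract does not give the paper's exact formulation, so the class name, the reward threshold used to split the pools, the mixing ratio, and the attractive-potential form of the reward are all illustrative assumptions.

```python
import math
import random
from collections import deque

class DualExperiencePool:
    """Two replay buffers split by reward magnitude; each minibatch mixes both.

    The threshold and mixing ratio are assumed values, not the paper's."""

    def __init__(self, capacity=10000, reward_threshold=0.0, high_ratio=0.5):
        self.high = deque(maxlen=capacity)   # transitions with reward above threshold
        self.low = deque(maxlen=capacity)    # remaining transitions
        self.threshold = reward_threshold
        self.high_ratio = high_ratio         # fraction of each batch drawn from the high pool

    def push(self, state, action, reward, next_state, done):
        """Route a transition to the high- or low-reward pool."""
        pool = self.high if reward > self.threshold else self.low
        pool.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        """Draw a mixed minibatch, favoring high-reward experience."""
        n_high = min(int(batch_size * self.high_ratio), len(self.high))
        n_low = min(batch_size - n_high, len(self.low))
        batch = random.sample(self.high, n_high) + random.sample(self.low, n_low)
        random.shuffle(batch)
        return batch

def apf_reward(pos, goal, prev_pos, k_att=1.0):
    """Attractive-potential shaping: reward the per-step decrease in distance
    to the goal, giving a dense signal instead of a sparse terminal reward."""
    return k_att * (math.dist(prev_pos, goal) - math.dist(pos, goal))
```

With this split, a step that moves the robot closer to the goal always yields a positive shaping reward, and transitions carrying large rewards stay over-represented in training batches even after the buffers fill.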

History
  • Online: January 16, 2025