The deep deterministic policy gradient (DDPG) algorithm, a state-of-the-art deep reinforcement learning method, has achieved good performance on continuous control problems in robotics. However, the conventional experience replay mechanism of DDPG stores the transitions explored by the mobile robot in a single buffer pool and trains the neural network by random sampling, without considering whether a transition is valuable, which can degrade network performance. To overcome this limitation, this study develops a DDPG framework with separated experience replay for collision-free mobile robot navigation, in which valuable transitions and failed transitions are stored and replayed separately. Additionally, an environment state vector covering the mobile robot and the obstacles is designed, together with the reward function and the action space. Simulation results show that the proposed model achieves collision-free navigation in the presence of multiple obstacles.

Toward Obstacle Avoidance for Mobile Robots Using Deep Reinforcement Learning Algorithm
Xiaoshan Gao, Liang Yan, Gang Wang, Tiantian Wang, Nannan Du, Chris Gerada
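As a concrete illustration of the separated experience replay idea described in the abstract, the sketch below keeps valuable and failed transitions in two pools and draws each training batch from both. This is a minimal sketch under stated assumptions: the class name, the collided flag used as the success/failure criterion, and the 25% failure-sampling ratio are illustrative choices, not details taken from the paper.

import random
from collections import deque

class SeparatedReplayBuffer:
    """Replay buffer that stores valuable and failed transitions separately."""

    def __init__(self, capacity=100_000, failed_ratio=0.25):
        self.valuable = deque(maxlen=capacity)  # e.g., transitions that progress toward the goal
        self.failed = deque(maxlen=capacity)    # e.g., transitions ending in a collision
        self.failed_ratio = failed_ratio        # assumed fraction of each batch drawn from failures

    def add(self, state, action, reward, next_state, done, collided):
        # "collided" is an assumed flag from the environment marking a failed transition.
        transition = (state, action, reward, next_state, done)
        (self.failed if collided else self.valuable).append(transition)

    def sample(self, batch_size):
        # Draw a fixed share of the batch from the failed pool, the rest from
        # the valuable pool, then shuffle so the two kinds are interleaved.
        n_failed = min(int(batch_size * self.failed_ratio), len(self.failed))
        n_valuable = min(batch_size - n_failed, len(self.valuable))
        batch = (random.sample(self.failed, n_failed)
                 + random.sample(self.valuable, n_valuable))
        random.shuffle(batch)
        return batch

In a DDPG training loop, add would be called once per environment step and sample once per gradient update, replacing the single uniform buffer so that collision experiences are not drowned out by the more numerous successful transitions.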
