Autonomous car-like mobile robots, such as delivery robots, have become a hot research topic in robotics. Accurately predicting pedestrians' crossing intention is important for improving robot delivery efficiency and protecting pedestrian safety. However, most current algorithms consider only the key points of the human skeleton, and their prediction performance needs improvement. In this paper, we propose an algorithm that predicts the crossing intention of pedestrians for autonomous car-like mobile robots by integrating facial expression with the human 2D skeleton. First, we collected videos of pedestrians crossing the road. We then extracted the key points of the human skeleton and the facial expressions of pedestrians at different moments from these videos and combined them to build a dataset for predicting pedestrians' crossing intention. We constructed a neural network that integrates a Graph Convolutional Network (GCN) and Long Short-Term Memory (LSTM), and trained and validated it on the dataset. The network extracts more detailed spatial and temporal features than the traditional method based only on LSTM. Experiments show that the presented method achieves an accuracy close to 80% with only 200 sets of data.

Intent Prediction of Pedestrians via Integration of Facial Expression and Human 2D Skeleton for Autonomous Car-like Mobile Robots
Xuefeng Zhu, Wenqian Fu, Xiaojun Xu
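The abstract describes a network that applies a GCN over the 2D skeleton joints of each frame and feeds the resulting features, together with a facial-expression embedding, into an LSTM for classification. Below is a minimal PyTorch sketch of such a GCN+LSTM fusion classifier; the layer sizes, the 17-joint skeleton, the 7-dimensional expression vector, and the names (GCNLayer, IntentNet) are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical GCN + LSTM pedestrian-intention classifier.
# All dimensions and names are assumptions for illustration only.
import torch
import torch.nn as nn


class GCNLayer(nn.Module):
    """One graph convolution: aggregate joint features over a fixed
    normalized adjacency matrix, then apply a shared linear projection."""

    def __init__(self, in_dim, out_dim, adj):
        super().__init__()
        self.register_buffer("adj", adj)            # (J, J) normalized adjacency
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x):                           # x: (B, T, J, C)
        x = torch.einsum("ij,btjc->btic", self.adj, x)
        return torch.relu(self.proj(x))


class IntentNet(nn.Module):
    """Per-frame GCN over skeleton joints, concatenated with a facial
    expression embedding, followed by an LSTM over the frame sequence."""

    def __init__(self, adj, num_joints=17, expr_dim=7, hidden=64, num_classes=2):
        super().__init__()
        self.gcn1 = GCNLayer(2, 32, adj)            # input: (x, y) per joint
        self.gcn2 = GCNLayer(32, 32, adj)
        self.expr_fc = nn.Linear(expr_dim, 16)      # facial expression scores
        self.lstm = nn.LSTM(num_joints * 32 + 16, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)  # cross / not cross

    def forward(self, skeleton, expression):
        # skeleton: (B, T, J, 2), expression: (B, T, expr_dim)
        h = self.gcn2(self.gcn1(skeleton))          # spatial features per frame
        h = h.flatten(2)                            # (B, T, J*32)
        e = torch.relu(self.expr_fc(expression))    # (B, T, 16)
        out, _ = self.lstm(torch.cat([h, e], dim=-1))
        return self.head(out[:, -1])                # logits from the last step
```

In such a sketch, the adjacency matrix would encode the skeleton's bone connections (normalized, with self-loops), and classifying from the LSTM's last hidden state mirrors the spatial-then-temporal split the abstract describes.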
