Learn to Navigate in Dynamic Environments with Normalized LiDAR Scans

Accepted by ICRA 2024. Paper | Video.


First, we designed a simulator equipped with a LiDAR sensor that significantly accelerates simulation during DRL training.

Second, to bridge the gap to real-world scenarios, we model dynamic humans as circles with a fixed radius and static obstacles as rectangles with variable sizes. Because the contours of real-world obstacles differ markedly from these simulated shapes, we normalize the collision margins of real-world obstacles into the same representation: clustering algorithms localize and frame moving humans and static obstacles, yielding their centroids and circumscribed cuboids in 3D space, which we then normalize as circles or rectangles on the 2D plane. We subsequently re-generate 2D LiDAR scans from the normalized obstacles, as sketched below.

Third, instead of feeding tens of consecutive LiDAR scans or high-dimensional depth images into large decoding networks, we leverage a long short-term memory (LSTM) network to process ego-centric sequential LiDAR scans, which reduces hardware resource consumption and enables deployment on small mobile robots.
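The sketch below illustrates the normalization and scan re-generation idea under stated assumptions: it is not the released code. DBSCAN stands in for the unspecified clustering algorithm, the points are assumed to be already projected onto the ground plane (the paper works with 3D circumscribed cuboids first), the robot is assumed to sit at the origin, and the fixed human radius, `eps`, and all function names (`normalize_obstacles`, `regenerate_scan`) are illustrative.

```python
# Minimal sketch, assuming 2D projected obstacle points and a robot at the origin.
import numpy as np
from sklearn.cluster import DBSCAN

HUMAN_RADIUS = 0.3  # assumed fixed radius for dynamic humans [m]

def normalize_obstacles(points_2d, is_dynamic, eps=0.3, min_samples=5):
    """Cluster obstacle points and fit normalized shapes.

    points_2d:  (N, 2) obstacle points projected onto the ground plane.
    is_dynamic: (N,) bool mask marking points on moving humans
                (assumed to come from an upstream tracking/classification step).
    Returns circles (cx, cy, r) and axis-aligned rectangles (xmin, ymin, xmax, ymax).
    """
    circles, rects = [], []
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit(points_2d).labels_
    for cid in set(labels):
        if cid == -1:                       # DBSCAN noise label
            continue
        cluster = points_2d[labels == cid]
        centroid = cluster.mean(axis=0)
        if is_dynamic[labels == cid].mean() > 0.5:
            # dynamic human -> fixed-radius circle around the centroid
            circles.append((centroid[0], centroid[1], HUMAN_RADIUS))
        else:
            # static obstacle -> circumscribed axis-aligned rectangle
            xmin, ymin = cluster.min(axis=0)
            xmax, ymax = cluster.max(axis=0)
            rects.append((xmin, ymin, xmax, ymax))
    return circles, rects

def regenerate_scan(circles, rects, n_beams=360, max_range=10.0):
    """Ray-cast against the normalized shapes to rebuild a 2D LiDAR scan."""
    angles = np.linspace(-np.pi, np.pi, n_beams, endpoint=False)
    scan = np.full(n_beams, max_range)
    for i, a in enumerate(angles):
        d = np.array([np.cos(a), np.sin(a)])
        # ray-circle intersection (robot assumed outside the circle)
        for cx, cy, r in circles:
            oc = np.array([cx, cy])
            t = d @ oc                      # closest approach along the ray
            if t > 0:
                d2 = oc @ oc - t * t        # squared ray-to-center distance
                if d2 <= r * r:
                    scan[i] = min(scan[i], t - np.sqrt(r * r - d2))
        # ray-rectangle intersection (slab method)
        for xmin, ymin, xmax, ymax in rects:
            with np.errstate(divide="ignore", invalid="ignore"):
                lo = np.array([xmin, ymin]) / d
                hi = np.array([xmax, ymax]) / d
            tmin = np.minimum(lo, hi).max()
            tmax = np.maximum(lo, hi).min()
            if tmax >= max(tmin, 0.0):
                hit = tmin if tmin > 0.0 else tmax
                scan[i] = min(scan[i], hit)
    return scan
```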
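To make the third step concrete, here is a minimal PyTorch sketch of an LSTM encoder that consumes a short history of normalized LiDAR scans and feeds a compact feature to a policy head. The layer sizes, beam count, history length, and velocity-command output are assumptions for illustration, not the exact architecture used in the paper.

```python
# Minimal sketch, assuming 360-beam scans and a 2D velocity-command action space.
import torch
import torch.nn as nn

class LidarLSTMEncoder(nn.Module):
    def __init__(self, n_beams=360, hidden_dim=128, action_dim=2):
        super().__init__()
        # compress each scan before the recurrent layer
        self.scan_fc = nn.Sequential(nn.Linear(n_beams, 256), nn.ReLU())
        self.lstm = nn.LSTM(input_size=256, hidden_size=hidden_dim,
                            batch_first=True)
        self.policy_head = nn.Sequential(
            nn.Linear(hidden_dim, 64), nn.ReLU(),
            nn.Linear(64, action_dim), nn.Tanh())   # e.g. (v, w) commands

    def forward(self, scan_seq):
        # scan_seq: (batch, seq_len, n_beams) ego-centric scan history
        x = self.scan_fc(scan_seq)
        _, (h_n, _) = self.lstm(x)          # keep only the final hidden state
        return self.policy_head(h_n[-1])

# usage: a batch of 4 episodes, each with its 8 most recent scans
policy = LidarLSTMEncoder()
scans = torch.rand(4, 8, 360)
action = policy(scans)                      # (4, 2) velocity commands
```

Processing a short ego-centric scan history this way keeps the network small compared with stacking tens of raw scans or decoding depth images, which is what makes deployment on small mobile robots feasible.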

Wei Zhu
Postdoc

My research interests include deep reinforcement learning, snake robots, wheeled bipedal robots, robotic arms, quadruped robots, and autonomous navigation.