Learning vector fields through persistent exploration for robot motion control
Keywords: Eikonal equation, Motion planning, Obstacle avoidance, Optimization, Reinforcement learning, Vector fields
Abstract
Vector fields are a well-established technique for mobile robot navigation. In environments with obstacles, however, their performance may be compromised, inducing trajectories that trap the robot in a certain region. This article therefore presents a new approach, based on reinforcement learning, for the interactive learning of vector fields. The resulting vector field is: (i) free of spurious equilibria, and (ii) optimal with respect to the length of the traveled path. Owing to an appropriate initialization of the vector field, the approach can solve tasks in environments with few obstacles even before learning has advanced. Simulations are presented to validate the proposed methodology.
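To illustrate the navigation scheme and the failure mode the abstract refers to, the sketch below follows a hand-built vector field: an attractive term toward the goal superposed with a repulsive term near one obstacle. All names, gains, and radii here are illustrative assumptions, not the paper's method; superposed fields like this one can exhibit spurious equilibria where the two terms cancel, which is precisely what the proposed learning approach removes.

```python
import numpy as np

def vector_field(p, goal, obstacle, r_obs=1.0):
    """Attractive field toward the goal plus a repulsive term inside a
    radius r_obs around the obstacle. Where the two terms cancel, a
    spurious equilibrium appears (the failure mode discussed above)."""
    attract = goal - p
    diff = p - obstacle
    d = np.linalg.norm(diff)
    repel = np.zeros(2)
    if 1e-9 < d < r_obs:
        # Repulsion grows as the robot gets closer to the obstacle.
        repel = (diff / d) * (r_obs - d) / d
    return attract + 3.0 * repel  # gain 3.0 is an arbitrary choice

def follow_field(start, goal, obstacle, dt=0.05, steps=500):
    """Integrate the field with explicit Euler steps (kinematic model)."""
    p = np.array(start, dtype=float)
    goal = np.asarray(goal, dtype=float)
    obstacle = np.asarray(obstacle, dtype=float)
    for _ in range(steps):
        p = p + dt * vector_field(p, goal, obstacle)
        if np.linalg.norm(p - goal) < 1e-2:
            break
    return p

# The obstacle is slightly off the start-goal line, so the repulsive
# term deflects the robot around it instead of trapping it head-on.
final = follow_field(start=(0.0, 0.0), goal=(5.0, 0.0), obstacle=(2.5, 0.2))
```

Placing the obstacle exactly on the line between start and goal would instead stall this hand-built field at a cancellation point behind the obstacle, motivating the learned, equilibrium-free fields the paper proposes.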