Dropped into an unknown environment, what should an agent do to quickly learn about the environment and how to accomplish diverse tasks within it?
We address this question within the goal-conditioned reinforcement learning paradigm by identifying how the agent should set its goals at training time to maximize exploration. We propose "planning exploratory goals" (PEG), a method that sets goals for each training episode to directly optimize an intrinsic exploration reward.
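In symbols, PEG's goal selection can be summarized by the following optimization (a sketch in notation of our own choosing; $\pi_G$, $\pi_E$, $\hat{p}$, and $r^{\text{expl}}$ are illustrative stand-ins for the goal-conditioned policy, the exploration policy, the learned world model, and the intrinsic exploration reward):

$$
g^{*} = \arg\max_{g}\; \mathbb{E}_{\hat{p},\;\pi_G(\cdot\mid g),\;\pi_E}\left[\sum_{t=T_{\text{go}}}^{T_{\text{go}}+T_{\text{exp}}} r^{\text{expl}}(s_t)\right]
$$

where the first $T_{\text{go}}$ imagined steps follow the goal-conditioned policy pursuing $g$, and the remaining $T_{\text{exp}}$ steps follow the exploration policy from wherever that pursuit ends.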
PEG first chooses goal commands such that the agent's goal-conditioned policy, at its current level of training, will end up in states with high exploration potential. It then launches an exploration policy starting from those promising states. To enable this direct optimization, PEG learns world models and adapts sampling-based planning algorithms to "plan goal commands". In challenging simulated robotics environments, including a multi-legged ant robot in a maze and a robot arm on a cluttered tabletop, PEG exploration enables more efficient and effective training of goal-conditioned policies relative to baselines and ablations. Our ant successfully navigates a long maze, and the robot arm successfully builds a stack of three blocks upon command.
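To make the planning step concrete, here is a minimal sketch of what sampling-based planning over goal commands could look like, assuming a cross-entropy-method planner; `model.step`, `pi_g`, `pi_e`, and `r_expl` are hypothetical stand-ins for the learned world model, the goal-conditioned policy, the exploration policy, and the intrinsic exploration reward, not the paper's actual API:

```python
# Hedged sketch of "planning goal commands", not the authors' implementation.
import numpy as np

def plan_exploratory_goal(s0, model, pi_g, pi_e, r_expl,
                          goal_dim, horizon_go=50, horizon_exp=20,
                          n_samples=256, n_elites=32, n_iters=5):
    """Cross-entropy-method search over goal commands g: pick the goal
    whose imagined go-phase rollout ends where the exploration policy
    collects the most intrinsic reward."""
    mu, sigma = np.zeros(goal_dim), np.ones(goal_dim)
    for _ in range(n_iters):
        goals = mu + sigma * np.random.randn(n_samples, goal_dim)
        scores = np.empty(n_samples)
        for i, g in enumerate(goals):
            # Go phase: imagine the current goal-conditioned policy chasing g.
            s = s0
            for _ in range(horizon_go):
                s = model.step(s, pi_g(s, g))
            # Explore phase: imagine the exploration policy taking over,
            # and score g by the intrinsic reward it uncovers from there.
            total = 0.0
            for _ in range(horizon_exp):
                s = model.step(s, pi_e(s))
                total += r_expl(s)
            scores[i] = total
        # Refit the sampling distribution to the highest-scoring goals.
        elites = goals[np.argsort(scores)[-n_elites:]]
        mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-6
    return mu  # goal command for the next training episode
```

Because every candidate rollout is imagined inside the learned world model, no real environment steps are spent evaluating goals, so hundreds of candidates can be scored before each training episode.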
We evaluate PEG and other goal-conditioned RL agents on four continuous-control environments ranging from navigation to manipulation. We attribute PEG's superior evaluation performance to its more sophisticated exploration, which yields more informative training data.
Below, we visualize the goals (red dots) chosen by various methods, and the states they explored (green dots), halfway through training in the Ant Maze. PEG explores the deepest part of the maze, whereas other methods barely reach the middle. A trend across tasks is that PEG consistently picks goals beyond the frontier of seen data, such as the top-left corner of the Ant Maze, driving deep exploration. Baselines like MEGA pick goals near the frontier; a few of these do land in the top-left corner, but the resulting exploration trajectories do not penetrate it.