At GTC, NVIDIA researchers introduced a robotics framework that combines model-based control and reinforcement learning to adaptively change contact sequences in real time.
The system could help delivery robots and other autonomous machines operate more effectively in unfamiliar environments and on terrain they have not encountered before.
The controller adapts to environmental changes on the fly, including scenarios not seen during training.
“The system consists of a high-level controller that learns to choose from a set of primitives in response to changes in the environment and a low-level controller that utilizes an established control method to robustly execute the primitives,” the researchers stated in their paper Learning a Contact-Adaptive Controller for Robust, Efficient Legged Locomotion.
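The two-level structure the researchers describe can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the primitive names, the linear scorer standing in for the learned high-level policy, and the PD law standing in for the model-based low-level controller are all assumptions made for clarity.

```python
import numpy as np

# Hypothetical primitive set; stand-ins for learned contact-sequence primitives.
PRIMITIVES = ["trot", "pace", "high_step", "pronk"]

class HighLevelPolicy:
    """Learned policy that picks a primitive from environment observations.
    A random linear scorer stands in for the trained network here."""
    def __init__(self, obs_dim, n_primitives, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(size=(n_primitives, obs_dim))

    def select(self, obs):
        scores = self.w @ obs          # one score per primitive
        return int(np.argmax(scores))  # index of the chosen primitive

class LowLevelController:
    """Model-based controller that executes the chosen primitive;
    reduced here to a PD law tracking a reference joint pose."""
    def __init__(self, kp=40.0, kd=1.0):
        self.kp, self.kd = kp, kd

    def torques(self, q, qd, q_ref):
        return self.kp * (q_ref - q) - self.kd * qd

# One control step: the high level selects a primitive at a low rate;
# the low level runs at a higher rate to execute it robustly.
obs = np.ones(8)
policy = HighLevelPolicy(obs_dim=8, n_primitives=len(PRIMITIVES))
idx = policy.select(obs)
q, qd = np.zeros(12), np.zeros(12)
q_ref = np.full(12, 0.5)  # assumed reference pose for the chosen primitive
tau = LowLevelController().torques(q, qd, q_ref)
```

The division of labor is the point: learning handles the discrete, environment-dependent choice of contact sequence, while an established control method handles the continuous, well-understood problem of executing it.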
The controller is up to 85% more energy-efficient and more robust than baseline methods, the researchers report.
Using an NVIDIA DGX system with multiple NVIDIA GPUs, the researchers trained their GPU-accelerated model in simulation on a treadmill. During training, the treadmill's two belts adjust their speeds independently, forcing the robot to rotate and interact with varied environments.
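The independent-belt setup amounts to randomizing the terrain motion each episode. A rough sketch of that idea follows; the speed range and the simulator interface (`set_belt_speeds`, `reset`, `step`) are assumptions for illustration, not the paper's actual training code.

```python
import random

def sample_belt_speeds(v_min=-0.5, v_max=1.5):
    """Draw an independent speed (m/s) for each treadmill belt.
    The range is an assumed example, not from the paper."""
    return random.uniform(v_min, v_max), random.uniform(v_min, v_max)

def run_episode(env, policy, steps=1000):
    """One training episode on the simulated treadmill: mismatched belt
    speeds make the robot rotate, so the policy sees varied conditions."""
    left, right = sample_belt_speeds()
    env.set_belt_speeds(left, right)  # assumed simulator API
    obs = env.reset()
    for _ in range(steps):
        obs, reward, done, info = env.step(policy(obs))
        if done:
            break
```

Varying the belts rather than the robot's dynamics gives the high-level policy a broad distribution of contact conditions to learn from, which helps explain the transfer result below.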
The researchers show that their model transfers readily to a real robot without sophisticated randomization or adaptation schemes.
Learn more about the work here.