Researchers from the University of California, Berkeley and Siemens designed a robot that learns how to grip new objects simply by studying a database of 3D shapes.
Using a GTX 1080 GPU and cuDNN with the TensorFlow deep learning framework, the team generated 6.7 million synthetic point clouds from thousands of 3D models to train their convolutional neural network to recognize robust grasps. Once trained, the robot was evaluated on test objects not included in the training set, and it successfully lifted them 99% of the time. The researchers say this is a significant step up from their previous methods, which relied on analytic and statistical sampling.
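The core idea behind that synthetic training data is to sample candidate grasps on 3D object models and label each one with an analytic robustness criterion, so the network can later predict robustness from sensor data alone. The sketch below is a heavily simplified, hypothetical illustration of that labeling step, not the team's actual pipeline: it samples two-finger grasp candidates on a single unit sphere and labels each by a basic antipodal friction-cone test, whereas the real system used thousands of meshes, rendered point clouds, and far more sophisticated grasp-quality metrics. All function names and the friction coefficient are assumptions for illustration.

```python
import numpy as np

def sphere_normal(p):
    # Outward surface normal of a unit sphere centered at the origin.
    return p / np.linalg.norm(p)

def sample_surface_point(rng):
    # Uniform random point on the unit sphere (normalized Gaussian sample).
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

def antipodal_label(c1, c2, friction_coef=0.5):
    # Simplified robustness test: a two-finger grasp is labeled robust (1)
    # if the grasp axis lies inside the friction cone at both contacts.
    axis = c2 - c1
    axis = axis / np.linalg.norm(axis)
    n1, n2 = sphere_normal(c1), sphere_normal(c2)
    half_angle = np.arctan(friction_coef)  # friction cone half-angle
    # Angle between the grasp axis and the inward normal at each contact.
    a1 = np.arccos(np.clip(np.dot(axis, -n1), -1.0, 1.0))
    a2 = np.arccos(np.clip(np.dot(-axis, -n2), -1.0, 1.0))
    return int(a1 <= half_angle and a2 <= half_angle)

def generate_dataset(n_grasps=1000, seed=0):
    # Sample candidate grasps (pairs of contact points) and label each one.
    # In a full pipeline, each grasp would be paired with a rendered point
    # cloud or depth image to form a (sensor input, label) training example.
    rng = np.random.default_rng(seed)
    grasps, labels = [], []
    for _ in range(n_grasps):
        c1, c2 = sample_surface_point(rng), sample_surface_point(rng)
        grasps.append((c1, c2))
        labels.append(antipodal_label(c1, c2))
    return grasps, np.array(labels)

grasps, labels = generate_dataset()
print(f"{labels.sum()} of {labels.size} sampled grasps labeled robust")
```

Because the labels come from geometry alone, millions of examples can be generated without a physical robot, which is what makes training a grasp-quality network at this scale feasible.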
The work shows how new approaches to robot learning, combined with the ability of robots to access information through the cloud, could advance the capabilities of robots in factories and warehouses, and might even enable these machines to do useful work in new settings like hospitals and homes.