Researchers say they have programmed robots to learn mechanical tasks on their own through trial and error, in a process inspired by the way humans learn.
Computer engineers at the University of California, Berkeley demonstrated their technique, which they described as a type of “reinforcement learning,” by having a robot complete various tasks. These included putting a clothes hanger on a rack, assembling a toy plane and screwing a cap on a water bottle.
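The core idea of reinforcement learning is that the robot is never told how to do the task, only how well each attempt went, and it gradually favors the actions that earned the most reward. A minimal sketch of that trial-and-error loop (a toy illustration, not the Berkeley system; the actions, reward values, and parameters here are invented for the example):

```python
import random

# Toy trial-and-error learner: the agent must discover, purely from
# reward feedback, which of three candidate motions works. Action 2
# is the (hidden) correct one -- an assumed reward signal.
REWARD = {0: 0.0, 1: 0.1, 2: 1.0}

def train(episodes=500, epsilon=0.2, lr=0.5, seed=0):
    rng = random.Random(seed)
    value = {a: 0.0 for a in REWARD}  # estimated value of each action
    for _ in range(episodes):
        if rng.random() < epsilon:                 # explore: try something new
            action = rng.choice(list(REWARD))
        else:                                      # exploit: best guess so far
            action = max(value, key=value.get)
        reward = REWARD[action]                    # feedback from the attempt
        value[action] += lr * (reward - value[action])  # update the estimate
    return value

values = train()
best = max(values, key=values.get)
```

After enough attempts the estimate for the rewarding action dominates, so the agent settles on it without ever being told it was correct.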
It’s “a new approach to empowering a robot to learn,” said Berkeley researcher Pieter Abbeel. “The key is that when a robot is faced with something new, we won’t have to reprogram it. The exact same software, which encodes how the robot can learn, was used to allow the robot to learn all the different tasks we gave it.”
Abbeel and colleagues plan to present the work on Thursday, May 28, in Seattle at the International Conference on Robotics and Automation.
“We still have a long way to go before our robots can learn to clean a house or sort laundry, but our initial results indicate that these kinds of deep learning techniques can have a transformative effect in terms of enabling robots to learn complex tasks entirely from scratch,” Abbeel said.
“Most robotic applications are in controlled environments where objects are in predictable positions,” added study collaborator Trevor Darrell. “The challenge of putting robots into real-life settings, like homes or offices, is that those environments are constantly changing. The robot must be able to perceive and adapt to its surroundings.”
The researchers turned to a new branch of artificial intelligence known as deep learning, which is loosely inspired by the cellular circuitry of the human brain.
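In a deep network, layers of simple units each compute a weighted sum of their inputs, pass it through a nonlinearity, and adjust their weights by gradient descent. A stripped-down sketch with a single such unit learning the logical AND function (purely illustrative; real deep networks stack many layers of these units, and all names and numbers here are assumptions for the example):

```python
import math
import random

def sigmoid(x):
    # Smooth nonlinearity applied to the weighted sum, loosely
    # analogous to a neuron's firing response.
    return 1.0 / (1.0 + math.exp(-x))

# Training data for logical AND: output 1 only when both inputs are 1.
DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def train(steps=5000, lr=1.0, seed=0):
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1), rng.uniform(-1, 1)]  # random initial weights
    b = rng.uniform(-1, 1)                        # bias term
    for _ in range(steps):
        for (x1, x2), target in DATA:
            out = sigmoid(w[0] * x1 + w[1] * x2 + b)
            err = out - target                    # gradient of squared error
            grad = err * out * (1 - out)          # chain rule through sigmoid
            w[0] -= lr * grad * x1                # nudge weights downhill
            w[1] -= lr * grad * x2
            b -= lr * grad
    return w, b

w, b = train()
predict = lambda x1, x2: sigmoid(w[0] * x1 + w[1] * x2 + b)
```

The unit starts with random weights and, through repeated small corrections, ends up responding strongly only to the (1, 1) input; deep learning scales this same weight-adjustment idea to millions of units.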
“For all our versatility, humans are not born with a repertoire of behaviors that can be deployed like a Swiss army knife, and we do not need to be programmed,” said postdoctoral researcher Sergey Levine, another collaborator in the project.