Robots typically fall short when it comes to learning from their past mistakes, which is how humans learn. And because spelling out every single aspect of every single task requires an enormous amount of programming, this has become a major hurdle in the development of viable artificial intelligence (AI) systems.
“The challenge of putting robots into real-life settings, like homes or offices, is that those environments are constantly changing. The robot must be able to perceive and adapt to its surroundings,” Trevor Darrell, a faculty member at the University of California, Berkeley, and director of the Berkeley Vision and Learning Center, said in a statement.
To get around this problem, and to develop robots capable of performing actions without pre-programmed details about their surroundings, researchers at the university turned to a new branch of AI -- inspired by the neural circuitry of the human brain -- known as “deep learning.” This is the kind of algorithm that was used to create a robot the researchers nicknamed BRETT (Berkeley Robot for the Elimination of Tedious Tasks).
“What we’re reporting on here is a new approach to empowering a robot to learn,” Pieter Abbeel, a professor in the university’s electrical engineering and computer science department, said in a statement. “The key is that when a robot is faced with something new, we won’t have to reprogram it. The exact same software, which encodes how the robot can learn, was used to allow the robot to learn all the different tasks we gave it.”
Deep-learning algorithms work through artificial “neural nets,” which take in raw sensory data and interpret it, looking for patterns they can act on. This is similar to the process humans employ. When humans are born, they are not pre-programmed to perform the vast variety of actions that they later carry out in their lives.
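To make the idea concrete, here is a minimal sketch of how a neural net turns raw sensory data into action scores. This is an illustrative toy, not BRETT's actual software: the layer sizes, the `relu` activation, and the random weights are all assumptions chosen for brevity.

```python
import numpy as np

def relu(x):
    # Common nonlinearity: pass positive values through, zero out the rest
    return np.maximum(0.0, x)

def forward(pixels, w1, b1, w2, b2):
    """Map raw sensory input (here, a flattened camera patch) to
    scores over a few possible motor actions via one hidden layer."""
    hidden = relu(pixels @ w1 + b1)  # intermediate features ("patterns")
    return hidden @ w2 + b2          # one score per candidate action

rng = np.random.default_rng(0)
pixels = rng.random(16)                           # stand-in for a 4x4 image
w1, b1 = rng.standard_normal((16, 8)), np.zeros(8)
w2, b2 = rng.standard_normal((8, 3)), np.zeros(3)

scores = forward(pixels, w1, b1, w2, b2)
print(scores.shape)
```

Training would adjust the weights `w1, b1, w2, b2` from experience; here they are fixed random values just to show the data flow from pixels to action scores.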
“Instead, we learn new skills over the course of our life from experience and from other humans,” Sergey Levine, a postdoctoral researcher at the university, said in the statement. “This learning process is so deeply rooted in our nervous system that we cannot even communicate to another person precisely how the resulting skill should be executed. We can at best hope to offer pointers and guidance as they learn it on their own.”
Similarly, BRETT’s software, which includes an algorithm with a reward function for every task successfully performed, takes in the scene and learns which movements are better for the task at hand. However, the robot still takes about 10 minutes to learn a task even when told exactly where it needs to start and stop, and three hours if it has to learn those positions itself.
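The reward-driven loop described above can be sketched in a few lines. This is not the researchers' algorithm, only a minimal stand-in: the one-dimensional `movement` parameter, the hypothetical `reward` function, and the target value 0.7 are all invented for illustration. The robot tries small variations and keeps the ones its reward function scores higher.

```python
import random

def reward(movement, target=0.7):
    """Hypothetical reward: higher when the movement parameter lands
    closer to a target the learner does not know in advance."""
    return -abs(movement - target)

def learn(steps=200, step_size=0.05, seed=1):
    """Trial-and-error search: perturb the movement randomly and
    keep each change only if the reward function favors it."""
    rng = random.Random(seed)
    movement = 0.0
    best = reward(movement)
    for _ in range(steps):
        candidate = movement + rng.uniform(-step_size, step_size)
        r = reward(candidate)
        if r > best:  # successful trial: keep the better movement
            movement, best = candidate, r
    return movement

print(learn())
```

Real systems search over thousands of parameters with far richer reward signals, which is part of why learning a task still takes BRETT minutes to hours; but the principle is the same: score each attempt and keep what works.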
Eventually, though, the researchers hope that further improvements in the technology will make AI capable of adapting to its surroundings and performing a vast variety of operations.
“We still have a long way to go before our robots can learn to clean a house or sort laundry, but our initial results indicate that these kinds of deep learning techniques can have a transformative effect in terms of enabling robots to learn complex tasks entirely from scratch,” Abbeel said in the statement.