Researchers from Carnegie Mellon University and Google have introduced a new approach to training robots: alongside the robot learning the main task, the system includes an adversary that tries to thwart it, which forces the robot to improve. The work was presented at the ICRA 2017 conference and is available on arXiv.org; IEEE Spectrum gives a brief account of it.
Machine learning is now widely used to teach robots new actions. Google engineers and researchers have spent years training robots to grasp objects and perform other manipulation tasks. For example, in 2016 they used a neural network to teach a robot to adjust its movements while grasping objects, and later that year a similar system was taught to open doors. In the latter work, several robots performed the same task simultaneously and sent data about their attempts to a server, which gradually improved a shared neural network. Thanks to this parallel accumulation of experience, the robots trained several times faster.
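The benefit of pooling experience from several robots can be illustrated with a toy sketch. Everything here is invented for illustration (the scalar "experience" counter, the saturating skill curve, the class and function names); it only shows why four robots feeding one shared model reach a given skill level roughly four times faster than one robot.

```python
class SharedModel:
    """Toy stand-in for the server-side neural network shared by all robots."""

    def __init__(self):
        self.experience = 0  # number of pooled training examples so far

    def success_probability(self):
        # More pooled data -> better grasping, saturating well below 1.0.
        # The coefficients are arbitrary illustrative numbers.
        return min(0.9, 0.3 + 0.001 * self.experience)

    def update(self, n_new_examples):
        self.experience += n_new_examples


def rounds_to_skill(n_robots, target=0.8):
    """Count training rounds until the shared model reaches a target skill.

    Each round, every robot contributes one experience to the shared model.
    """
    model = SharedModel()
    rounds = 0
    while model.success_probability() < target:
        model.update(n_robots)
        rounds += 1
    return rounds


print(rounds_to_skill(1))  # one robot collecting data alone
print(rounds_to_skill(4))  # four robots pooling experience in parallel
```

In this sketch the four-robot fleet needs exactly a quarter of the rounds, which is the intuition behind the speed-up the article describes; real training gains are noisier, but the pooling principle is the same.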
In the new work, the research team took a different approach. They judged a grasp successful not merely when the object was lifted, but also by testing how firmly the robot held it. To do this, the researchers added two new elements to the system. First, after a seemingly successful grasp, the robot shook the object to check how securely it was held. The main change, however, was the addition of an adversary. The robot consisted of two manipulators, one of which grasped various items, such as household appliances and toys. The engineers made the second arm an adversary that tried to snatch the object from the first. And, like the main grasping arm, the adversary arm was also controlled by a self-learning neural network.
When the adversarial arm wrested the object from the grasping one, both systems gained experience: positive for one and negative for the other. In this way the researchers reproduced in their robot the classic arms race between sword and shield, which ultimately made both systems significantly more effective: after adversarial training, the share of successful grasps rose to 82 percent, compared with 68 percent without an adversary.
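The zero-sum experience split described above can be sketched as a small reward-assignment function. This is a hedged illustration of the idea, not code from the paper: the function name, the reward values, and the two boolean outcomes are all assumptions made for clarity.

```python
def assign_rewards(held_after_shake, adversary_snatched):
    """Toy zero-sum experience split between the two arms.

    held_after_shake: did the grasping arm still hold the object
                      after the shake test?
    adversary_snatched: did the adversarial arm manage to pull
                        the object away?
    Returns an illustrative reward for each arm's learning system.
    """
    if not held_after_shake:
        # The grasp failed on its own; the adversary never got a contest.
        return {"grasper": -1.0, "adversary": 0.0}
    if adversary_snatched:
        # The adversary wins: positive experience for it, negative
        # for the grasping arm.
        return {"grasper": -1.0, "adversary": +1.0}
    # A firm grasp survived both the shake and the snatch attempt.
    return {"grasper": +1.0, "adversary": -1.0}


print(assign_rewards(held_after_shake=True, adversary_snatched=True))
print(assign_rewards(held_after_shake=True, adversary_snatched=False))
```

The key point the sketch captures is that every contested episode teaches both networks at once: one arm's failure is the other arm's training signal, which is why the competition improves both sides.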
Other machine-learning work was also presented at ICRA 2017; for example, researchers from the Massachusetts Institute of Technology developed a system that allows skills to be transferred between robots of different designs.