Virtual Demonstrations and Crowdsourcing Could Lead to Advancements in Autonomous Robots
University of Maryland (UMD) Assistant Research Professor Krishnanand N. Kaipa, graduate student Joshua Langsfeld and Professor Satyandra K. Gupta have published an article, "Robot See, Robot Do," in the American Society of Mechanical Engineers' magazine Mechanical Engineering, discussing new approaches to programming autonomous robots through imitation learning, virtual demonstrations and even crowdsourcing.
From unmanned aerial vehicles and assembly lines to medical surgery and the classroom, society is incorporating robots into more and more everyday roles that require them to perform increasingly complex tasks. Current manual programming approaches, however, limit how successfully these robots can be programmed for complex jobs.
But what if you could 'show' the robot how to do the job? What if the robot could even learn from failure, and develop more efficient processes on its own? That's exactly what Kaipa and his co-authors set out to demonstrate through advancing techniques in what is called 'imitation learning'—where robots are shown demonstrations to help them learn a given task.
Through examples ranging from unmanned helicopters to robots performing general manipulation tasks (think setting the table), the authors highlight how robots can acquire complex skills by 'watching' human-driven demonstrations. This approach has its limitations, though: robots can typically draw on only a small number of successful human demonstrations, and subtle differences between the robot and the human demonstrator introduce a greater degree of error.
To combat these shortcomings in imitation learning from physical demonstrations, the UMD researchers are developing imitation learning algorithms that would help robots learn not just from successful demonstrations, but from failures as well. Just like a human learner, the robot gains experience and improves its performance through repetition. Their current project involves training a robot to pour liquid into a container on a revolving platform—mimicking an assembly-line-style task—with the robot learning from its own trial and error how to succeed.
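To illustrate the general idea of learning from both successes and failures, here is a minimal toy sketch; it is not the authors' algorithm, and the task parameter (a hypothetical pour duration) and scoring rule are assumptions made purely for illustration. Each trial records a tried parameter and whether it succeeded, and the learner picks the next candidate that sits close to past successes and far from past failures.

```python
def learn_pour_duration(trials, grid=None):
    """Pick a candidate pour duration (in seconds) informed by past trials.

    trials: list of (duration, succeeded) pairs from earlier attempts.
    Returns the grid value scoring highest on proximity to successes
    and distance from failures.
    """
    if grid is None:
        # Candidate durations to consider: 0.1 s through 5.0 s.
        grid = [round(0.1 * i, 1) for i in range(1, 51)]
    successes = [d for d, ok in trials if ok]
    failures = [d for d, ok in trials if not ok]

    def score(c):
        # Reward closeness to successful trials, penalize closeness to failures.
        s = sum(1.0 / (1.0 + abs(c - d)) for d in successes)
        f = sum(1.0 / (1.0 + abs(c - d)) for d in failures)
        return s - f

    return max(grid, key=score)


# Example: two failed pours (too short, too long) and two successes near 2 s.
trials = [(1.0, False), (2.0, True), (2.2, True), (4.0, False)]
best = learn_pour_duration(trials)  # a duration near the successful region
```

A real system would, of course, learn over many physical parameters at once and update online as new trials arrive; the point here is simply that failed attempts carry information and shape the next attempt just as successes do.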
The researchers are not stopping there, though. As tasks increase in complexity, providing enough accurate demonstrations for reliable imitation learning becomes costly in both time and money. According to the authors, turning to the virtual world and crowdsourcing is becoming an effective alternative.
These virtual demonstrations rely on physics-based robot simulators and, through online games and challenges, enable thousands of individuals to interact and collaborate with virtual robotic counterparts. These interactions can produce robot behaviors that might not be achieved in a physical setting, because the pool of demonstrations is more diverse and more likely to include novel and creative ways of performing a task.
According to the authors, "We expect demonstrators recruited via crowd-sourcing to be non-experts and therefore to fail, but robots can still learn from those failures, just as humans do."