
In brief

Breaking complex actions down into their basic components helps robots learn by observing human demonstrators.

© A*STAR’s Institute for Infocomm Research (I2R)

Teaching robots by example

26 Mar 2021

By breaking complex actions into their basic components, researchers have developed a versatile framework that enables robots to learn from human demonstrators.

Young children mimic everything they see and hear around them, sometimes to comic effect. While learning from observed behaviours comes naturally to children, it’s a different story for robots. One effective way to teach a robot a task is to define the skill’s geometric null space, the set of poses needed to perform the skill, along with its constraints. Together, these form a mathematical representation of the skill that can be performed by any robot in any environment.

Take the simple example of grasping a bottle, said Yan Wu, Assistant Head of the Robotics and Autonomous Systems Department at A*STAR’s Institute for Infocomm Research (I2R). “The hand pose is constrained to be at a certain distance and orientation with respect to the bottle. The geometric null space of this task is therefore a sort of cylinder with a radius and height depending on the dimensions of the grasped object,” he said.
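
To make the idea concrete, here is a rough, hypothetical sketch (not the team’s actual implementation) of the cylindrical null space Wu describes: a candidate hand position belongs to the grasp’s null space if it sits at the required radial distance from the bottle’s axis and within the bottle’s height. All function and parameter names below are illustrative assumptions.

```python
import numpy as np

def in_grasp_null_space(hand_pos, bottle_base, bottle_axis, radius, height, tol=0.01):
    """Illustrative check: does a hand position lie on the cylindrical
    null space of a bottle-grasp skill, i.e. at distance `radius` from
    the bottle's axis and within its `height`? (Sketch only.)"""
    axis = bottle_axis / np.linalg.norm(bottle_axis)   # unit vector along the bottle
    rel = hand_pos - bottle_base                       # hand position relative to the bottle base
    along = rel @ axis                                 # how far up the axis the hand sits
    radial = np.linalg.norm(rel - along * axis)        # distance of the hand from the axis
    return abs(radial - radius) < tol and 0.0 <= along <= height

# Example: a hand 7 cm from the axis of a 25 cm-tall bottle, halfway up its side.
print(in_grasp_null_space(np.array([0.07, 0.0, 0.12]),
                          bottle_base=np.array([0.0, 0.0, 0.0]),
                          bottle_axis=np.array([0.0, 0.0, 1.0]),
                          radius=0.07, height=0.25))   # True
```

The skill representation described by Wu also constrains the hand’s orientation relative to the object; for brevity, this snippet covers only position.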

Current approaches rely on expert, handcrafted constraints, which are inefficient and laborious to create. Instead, Wu and his collaborators Caixia Cai from I2R, Ying Siu Liang from A*STAR’s Institute for High Performance Computing (IHPC) and Nikhil Somani from A*STAR’s Advanced Remanufacturing and Technology Centre (ARTC) developed a framework that learns the geometric null space and its constraints directly from human demonstrations for six basic skills: grasp, place, move, pull, mix and pour.

While skills like grasping a cup are in themselves discrete actions, Wu and his team found that others had to be broken down into basic components. “For example, when moving an object, demonstrating the entire pick and place action did not result in a usable geometric null space,” Wu said. “But intuitively, if we segment it into pick, move and place skills, then the null spaces are apparent.”
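
As a hedged illustration of this segmentation idea (hypothetical, and not necessarily how the study segments its demonstrations), a recorded pick-and-place trajectory could be split wherever the gripper opens or closes, leaving pick, move and place segments whose null spaces can be estimated separately.

```python
def segment_by_gripper(frames):
    """Split a demonstration into segments whenever the gripper state changes.
    Each frame is a dict such as {"pose": ..., "gripper_closed": bool}.
    Hypothetical sketch; the paper's segmentation strategy may differ."""
    segments, current = [], [frames[0]]
    for prev, frame in zip(frames, frames[1:]):
        if frame["gripper_closed"] != prev["gripper_closed"]:
            segments.append(current)   # gripper event: close off the current segment
            current = []
        current.append(frame)
    segments.append(current)
    return segments                    # e.g. approach/pick, move, place/retreat
```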

© A*STAR Research

After identifying the basic skills to be taught, the researchers collected position and orientation information from recorded human demonstrations, extracted a set of data points representative of each skill’s geometric null space, and estimated the parameters of that null space. The geometric constraints of each skill could then be inferred from its null space.
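
As a rough sketch of this parameter-estimation step (an assumption for illustration, not the authors’ algorithm), the cylinder parameters of a grasp null space could be estimated from demonstrated hand positions expressed in the object’s frame, taking the average radial distance as the radius and the observed vertical span as the height range.

```python
import numpy as np

def fit_cylinder_null_space(hand_positions_obj_frame):
    """Estimate cylinder parameters (radius and height range) from demonstrated
    hand positions given in the object's frame, assuming the object's axis is
    the frame's z-axis. Illustrative sketch, not the paper's estimator."""
    pts = np.asarray(hand_positions_obj_frame, dtype=float)
    radial = np.linalg.norm(pts[:, :2], axis=1)          # distance of each point from the z-axis
    radius = radial.mean()                               # average radius across demonstrations
    z_min, z_max = pts[:, 2].min(), pts[:, 2].max()      # vertical span covered by the hand
    return radius, (z_min, z_max)

# Example with three synthetic demonstration points around a bottle:
demo = [[0.07, 0.00, 0.10], [0.00, 0.07, 0.15], [-0.07, 0.00, 0.12]]
print(fit_cylinder_null_space(demo))                     # ~ (0.07, (0.10, 0.15))
```

Constraints such as “stay at this radius from the object” then follow directly from the fitted parameters, mirroring the inference step described above.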

The researchers demonstrated the effectiveness of their framework by successfully executing the six basic skills using a simple industrial robot and the open-source iCub humanoid robot. The same framework can be adapted to allow other types of robots to learn even basic skills that were not tested in the study, such as twisting a lid, simply by tweaking the parameters, said Wu.

The researchers now plan to adapt their framework to learn more complex skills and incorporate deep learning methods throughout their pipeline.

The A*STAR-affiliated researchers contributing to this research are from the Institute for Infocomm Research (I2R), the Institute for High Performance Computing (IHPC) and the Advanced Remanufacturing and Technology Centre (ARTC).


References

Cai, C., Liang, Y.S., Somani, N. and Wu, Y. Inferring the Geometric Nullspace of Robotic Skills from Human Demonstrations. 2020 IEEE International Conference on Robotics and Automation (ICRA), 7668–7675 (2020).

About the Researcher


Yan Wu

Assistant Head of Robotics and Autonomous Systems Department

Institute for Infocomm Research

Yan Wu received his PhD degree in electrical engineering from Imperial College London in 2013. Between August 2012 and November 2013, he worked at the Institute of Child Health, University College London, UK, as an Honorary Research Associate and at the NHS Great Ormond Street Hospital for Children as a Research Fellow. He joined A*STAR’s Institute for Infocomm Research (I2R) in December 2013, where he is currently a lead scientist investigating robot dexterity. He is also the Assistant Head of the Robotics and Autonomous Systems Department at I2R. A Senior Member of the IEEE and a Neuro-robotics Systems Technical Committee Member of the IEEE Robotics and Automation Society, Wu is interested in assistive robotics and technologies, multimodal robot learning and human-robot interaction.

This article was made for A*STAR Research by Wildtype Media Group