Young children mimic everything they see and hear around them, sometimes to comic effect. While learning from observed behaviours comes naturally to children, it’s a different story for robots. One effective way for a robot to learn a task is to define the skill’s geometric null space, the set of valid poses for performing the skill, along with its constraints. Together, these form a mathematical representation of the skill that can be performed by any robot in any environment.
Take the simple example of grasping a bottle, said Yan Wu, Assistant Head of the Robotics and Autonomous Systems Department at A*STAR’s Institute for Infocomm Research (I2R). “The hand pose is constrained to be at a certain distance and orientation with respect to the bottle. The geometric null space of this task is therefore a sort of cylinder with a radius and height depending on the dimensions of the grasped object,” he said.
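To make this concrete, here is a minimal Python sketch of how such a cylindrical null space could be encoded. The class, parameter names and tolerance below are illustrative assumptions made for this article, not the team’s actual implementation.

```python
import numpy as np

# Illustrative sketch (not the authors' code): a grasp null space for a
# cylindrical object, parameterised by the object's radius and height.
# A hand position lies in the null space if it sits on the cylinder's
# lateral surface, i.e. at a fixed radial distance from the bottle axis
# and within the bottle's height range.
class CylinderGraspNullSpace:
    def __init__(self, centre, axis, radius, height, tol=0.01):
        self.centre = np.asarray(centre, dtype=float)   # base point of the bottle axis
        self.axis = np.asarray(axis, dtype=float)
        self.axis /= np.linalg.norm(self.axis)          # unit vector along the bottle
        self.radius = radius                            # grasp distance from the axis
        self.height = height
        self.tol = tol                                  # assumed tolerance, in metres

    def contains(self, hand_position):
        """Check whether a hand position satisfies the grasp constraint."""
        p = np.asarray(hand_position, dtype=float) - self.centre
        h = p @ self.axis                               # height along the axis
        radial = np.linalg.norm(p - h * self.axis)      # distance from the axis
        return (0.0 <= h <= self.height) and abs(radial - self.radius) < self.tol

# Any pose on the cylinder's side is a valid grasp candidate.
ns = CylinderGraspNullSpace(centre=[0, 0, 0], axis=[0, 0, 1], radius=0.05, height=0.2)
print(ns.contains([0.05, 0.0, 0.1]))   # True: on the lateral surface
print(ns.contains([0.20, 0.0, 0.1]))   # False: too far from the bottle
```

A hand pose anywhere on the cylinder’s side, at the right distance from the bottle’s axis, would then count as a valid grasp candidate.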
Current approaches rely on constraints handcrafted by experts, which are laborious and inefficient to create. Instead, Wu and his collaborators Caixia Cai from I2R, Ying Siu Liang from A*STAR’s Institute for High Performance Computing (IHPC) and Nikhil Somani from A*STAR’s Advanced Remanufacturing and Technology Centre (ARTC) turned to human demonstrations, developing a framework that teaches robots the geometric null space and its constraints for six basic skills: grasp, place, move, pull, mix and pour.
While skills like grasping a cup are in themselves discrete actions, Wu and his team found that others had to be broken down into basic components. “For example, when moving an object, demonstrating the entire pick-and-place action did not result in a usable geometric null space,” Wu said. “But intuitively, if we segment it into pick, move and place skills, then the null spaces are apparent.”
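As a rough illustration of that kind of segmentation (the binary gripper signal and the splitting rule here are assumptions made for this sketch, not details from the study), a recorded pick-and-place demonstration could be split at the moments the gripper closes and opens:

```python
# Illustrative sketch, not the study's method: split a pick-and-place
# demonstration into pick, move and place segments at the gripper's
# close and open events. The binary gripper signal (1 = closed around
# the object, 0 = open) is an assumption made for this example.
def segment_demonstration(poses, gripper_closed):
    close_idx = gripper_closed.index(1)  # first frame with the object grasped
    open_idx = len(gripper_closed) - gripper_closed[::-1].index(1)  # frame after the last grasped one
    return {
        "pick": poses[:close_idx + 1],       # approach and close the gripper
        "move": poses[close_idx:open_idx],   # transport while grasped
        "place": poses[open_idx - 1:],       # lower the object and release
    }

# Example: seven recorded frames, grasped during the middle three.
poses = ["p0", "p1", "p2", "p3", "p4", "p5", "p6"]
grip = [0, 0, 1, 1, 1, 0, 0]
print(segment_demonstration(poses, grip))
# {'pick': ['p0', 'p1', 'p2'], 'move': ['p2', 'p3', 'p4'], 'place': ['p4', 'p5', 'p6']}
```

Each segment can then be treated as its own basic skill, with its own null space learned separately.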
After identifying the basic skills that needed to be taught, the researchers collected position and orientation information from recorded human demonstrations, obtained a set of data points representative of the geometric null space for each skill, and estimated the parameters of each null space. The geometric constraints could then be inferred from the fitted null space.
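As a sketch of what that estimation step might look like for the grasp cylinder described earlier (the principal-component fit below is our illustrative assumption, not the authors’ published method):

```python
import numpy as np

# Illustrative parameter estimation (not the authors' published method):
# given hand positions recorded from grasp demonstrations, recover the
# cylinder that best explains them. The bottle axis is approximated by the
# direction of largest spread (assuming the demonstrations span the
# bottle's height); radius and height follow from the projected points.
def fit_grasp_cylinder(points):
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)        # principal directions
    axis = vt[0]                                    # assumed bottle axis
    heights = (pts - centroid) @ axis               # positions along the axis
    radials = np.linalg.norm((pts - centroid) - np.outer(heights, axis), axis=1)
    return {
        "centre": centroid + heights.min() * axis,  # base of the demonstrated span
        "axis": axis,
        "radius": radials.mean(),                   # constraint: grasp distance from the axis
        "height": heights.max() - heights.min(),    # constraint: allowed height range
    }
```

The fitted parameters could then populate a null-space representation like the CylinderGraspNullSpace sketch above, from which the grasp constraints follow directly.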
The researchers demonstrated the effectiveness of their framework by successfully executing the six basic skills on a simple industrial robot and on the open-source iCub humanoid robot. The same framework can be adapted so that other types of robots can learn even basic skills that were not tested in the study, such as twisting a lid, simply by tweaking the parameters, said Wu.
The researchers now plan to adapt their framework to learn more complex skills and incorporate deep learning methods throughout their pipeline.
The A*STAR-affiliated researchers contributing to this research are from the Institute for Infocomm Research (I2R), the Institute for High Performance Computing (IHPC) and the Advanced Remanufacturing and Technology Centre (ARTC).