Search and rescue operations rely primarily on human rescuers and trained animals. However, these rescuers are often hampered by hazardous conditions such as unstable structures, poor lighting and bad weather. The resulting delays can cost valuable time and jeopardize the success of rescue efforts.
One way to overcome these limitations is to use robots instead. For such applications, a robot's ability to distinguish textures by touch, telling human skin from fabric, concrete or metal, for instance, is critical.
Scientists at the Institute for Infocomm Research (I2R), in collaboration with the National University of Singapore and Nanyang Technological University, Singapore, have developed an approach to equip robots with sensitive touch capabilities. They presented their findings in a paper at the 2019 IEEE International Conference on Robotics and Automation (ICRA).
Yan Wu, a Research Scientist at I2R and senior author on the paper, explained that their system mimics the way humans distinguish textures by touch. “Touch is performed in a two-stage fashion,” said Wu. In the first stage, initial contact yields a rough estimate of a surface’s coarse properties. This is followed by a sliding stage, in which finer details are sensed by gently rubbing the surface. Temporal signals also play an important role in this sensing, he added.
Thereafter, robots must learn to associate particular tactile signals with specific types of surfaces. To facilitate this tactile learning, the group applied a hybrid machine learning approach, combining convolutional neural networks (CNNs) with long short-term memory (LSTM) networks.
In essence, when their robot touches a surface, a tactile map loosely analogous to a camera image is generated, accompanied by time-sequence information. The CNN analyzes the ‘image’ data while the LSTM looks for patterns in the temporal data. Together, the two networks allow the robot to classify each tactile signal as one of 23 different textures.
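While the paper’s exact architecture is not reproduced here, a minimal PyTorch sketch conveys the general idea of such a CNN-LSTM hybrid. The taxel-map resolution, sequence length and layer sizes below are illustrative assumptions; only the 23-class output corresponds to the study.

```python
import torch
import torch.nn as nn

class TactileCNNLSTM(nn.Module):
    """Illustrative CNN-LSTM hybrid for tactile texture classification.

    A CNN encodes each tactile 'frame' (a taxel map treated like a small
    grayscale image); an LSTM then models the temporal pattern across
    frames. All dimensions are hypothetical, except the 23 output
    classes reported in the paper.
    """

    def __init__(self, num_classes: int = 23, hidden_size: int = 64):
        super().__init__()
        # Spatial feature extractor, applied independently to each frame.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (N, 32, 1, 1)
            nn.Flatten(),             # -> (N, 32)
        )
        # Temporal model over the sequence of per-frame embeddings.
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden_size,
                            batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, height, width) sequence of tactile maps.
        b, t, h, w = x.shape
        frames = x.reshape(b * t, 1, h, w)          # fold time into batch
        feats = self.cnn(frames).reshape(b, t, -1)  # (batch, time, 32)
        out, _ = self.lstm(feats)
        return self.classifier(out[:, -1])          # logits from last step


# Example: a batch of 4 sequences, each 50 frames of an 8x8 taxel array.
model = TactileCNNLSTM()
logits = model(torch.randn(4, 50, 8, 8))
print(logits.shape)  # torch.Size([4, 23])
```

The division of labor mirrors the description above: the convolutional layers capture the spatial layout of each tactile map, while the recurrent layer picks up how those maps evolve as the sensor slides across the surface.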
The researchers showed that their CNN-LSTM architecture outperformed previous state-of-the-art machine learning techniques by as much as 10 percent in texture classification accuracy.
Wu’s group is now looking into improving this approach by expanding the tactile dataset to include more material surfaces. “This will allow us to improve the robustness of the architecture and help build an open database for the research community to work on common problems and benchmark their solutions against ours,” he said.
The A*STAR-affiliated researchers contributing to this research are from the Institute for Infocomm Research (I2R).