
In brief

DeepFuzzy is a deep learning algorithm that uses WiFi signals to help robots navigate indoors.


Helping robots feel their way around

5 Aug 2021

A deep learning technique developed by A*STAR researchers uses WiFi data to help robots find their way around.

One thing humans do effortlessly, yet robots find unexpectedly challenging, is navigating new environments. While exploring a new area, we use visual cues to build up a mental map of the location and of our position within it. Similar camera-based systems help robots ‘see’, but they fall short in certain contexts, such as when sensors do not have a clear line of sight or when backgrounds are blank and featureless.

Nonetheless, robots have other senses that are unavailable to humans and can be used to map out the environment. Taking advantage of the fact that WiFi is ubiquitous in most indoor spaces, a team led by Le Zhang of A*STAR’s Institute for Infocomm Research (I2R) designed a novel non-visual system that helps robots orient themselves in new environments.

First, a mobile device is used to survey a location where WiFi access points have been installed beforehand. At specified reference spots, the device records the unique WiFi received signal strength (RSS) from each access point in a map database. Afterward, when a robot or another device reports RSS readings from anywhere within the surveyed space, the database can be used to estimate its location.
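Conceptually, the survey step builds a lookup table that pairs each reference spot with its RSS signature. The minimal Python sketch below uses invented coordinates, signal values and an invented variable name, fingerprint_db; the team's actual data format is not described in this article:

```python
# Minimal sketch of the offline survey step (all values hypothetical).
# At each reference spot, the surveying device stores the RSS (in dBm)
# it measures from every visible access point, keyed by the spot's
# coordinates in the map.
fingerprint_db = {
    # (x, y) reference spot -> RSS vector, one entry per access point
    (0.0, 0.0): [-42.0, -67.5, -80.1],
    (1.0, 0.0): [-45.3, -63.2, -78.8],
    (0.0, 1.0): [-48.9, -70.1, -75.4],
    # ...one entry for every surveyed reference spot
}
```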

What the team needed next was a technique that could accurately turn RSS information into location coordinates. The algorithms used for these calculations are called fingerprinting-based algorithms, so named because the virtual ‘fingerprints’ collected across an area serve as a priori knowledge for robot localization or navigation.
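The textbook fingerprinting baseline matches a new RSS reading against the stored fingerprints with a k-nearest-neighbour search. The sketch below (plain NumPy, reusing the hypothetical fingerprint_db from above) illustrates that classical idea only; it is a simple baseline, not the team's model:

```python
import numpy as np

def knn_localize(rss_query, fingerprint_db, k=3):
    """Estimate a position as the average of the k reference spots
    whose stored RSS vectors are closest (Euclidean) to the query."""
    spots = list(fingerprint_db.keys())
    rss_matrix = np.array([fingerprint_db[s] for s in spots])
    dists = np.linalg.norm(rss_matrix - np.asarray(rss_query), axis=1)
    nearest = np.argsort(dists)[:k]
    return np.mean([spots[i] for i in nearest], axis=0)

# Example query from somewhere inside the surveyed space:
# knn_localize([-44.0, -65.0, -79.0], fingerprint_db)  ->  estimated (x, y)
```

Fingerprinting sidesteps radio propagation modelling entirely: it assumes only that places with similar WiFi signatures are physically close, which is why the quality of the initial survey matters so much.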

“Our model, called DeepFuzzy, is proposed to inherit the merits of decision trees and deep neural networks within an end-to-end trainable architecture,” Zhang said.

In DeepFuzzy, high-level features are first extracted from a sample by a deep network, after which they are routed into decision trees that use fuzzy logic to make predictions. These ‘fuzzy trees’ are better suited to dealing with ambiguous information and give more generalizable results. In performance comparisons against other models, deep learning techniques generally outperformed their counterparts, but DeepFuzzy gave the most accurate estimates of them all.
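As a rough illustration of the idea (not the authors' actual implementation), the PyTorch sketch below wires a small feature extractor into an ensemble of soft decision trees, in the spirit of deep neural decision forests; the class names, tree depth and layer sizes are all invented for this example:

```python
import torch
import torch.nn as nn

class SoftTree(nn.Module):
    """One 'fuzzy' tree: every inner node makes a soft (probabilistic)
    left/right split, and the output is the path-probability-weighted
    average of the leaf values."""
    def __init__(self, in_dim, out_dim, depth=4):
        super().__init__()
        self.depth = depth
        n_inner, n_leaf = 2 ** depth - 1, 2 ** depth
        self.gates = nn.Linear(in_dim, n_inner)         # soft split decisions
        self.leaves = nn.Parameter(torch.randn(n_leaf, out_dim))

    def forward(self, x):
        p_right = torch.sigmoid(self.gates(x))          # (batch, n_inner)
        leaf_prob = x.new_ones(x.size(0), 1)            # prob. of reaching root = 1
        idx = 0
        for _ in range(self.depth):
            n = leaf_prob.size(1)
            p = p_right[:, idx:idx + n]                 # this level's gates
            idx += n
            # split each node's probability mass between its two children
            leaf_prob = torch.stack(
                [leaf_prob * (1 - p), leaf_prob * p], dim=2
            ).reshape(x.size(0), 2 * n)
        return leaf_prob @ self.leaves                  # (batch, out_dim)

class DeepFuzzySketch(nn.Module):
    """Hypothetical end-to-end model: a small MLP extracts high-level
    features from the RSS vector, and an ensemble of soft trees
    averages its (x, y) position predictions."""
    def __init__(self, n_access_points, n_trees=5, hidden=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Linear(n_access_points, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.trees = nn.ModuleList(
            SoftTree(hidden, out_dim=2) for _ in range(n_trees))

    def forward(self, rss):
        h = self.features(rss)
        return torch.stack([t(h) for t in self.trees]).mean(dim=0)
```

Because every split is a sigmoid rather than a hard threshold, the whole model is differentiable, so the trees and the feature extractor can be trained jointly by ordinary backpropagation; this is what makes such an architecture ‘end-to-end trainable’ in the sense Zhang describes.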

Zhang said DeepFuzzy can be used in tasks ranging from visual surveillance to image super-resolution. He and his team plan to take what they learned from this project to produce better deep learning techniques with useful applications. “We are always interested in developing effective and efficient deep learning techniques for real-life problems,” he said.

The A*STAR researchers contributing to this study come from the Institute for Infocomm Research (I2R).


References

Zhang, L., Chen, Z., Cui, W., Li, B., Chen, C. et al. WiFi-based indoor robot positioning using deep fuzzy forests. IEEE Internet of Things Journal 7 (11), 10773–10781 (2020).

About the Researcher

Le Zhang is a deep learning and computer vision scientist based at A*STAR’s Institute for Infocomm Research (I2R). After completing his BEng degree at the University of Electronic Science and Technology of China, Zhang was awarded an MSc and a PhD by Nanyang Technological University, Singapore. Zhang currently serves on the organizing committees of several international conferences in the fields of artificial intelligence and computer vision. He is also on the editorial boards of computer science publications including IET Biometrics, Pattern Recognition and Neurocomputing.

This article was made for A*STAR Research by Wildtype Media Group