Navigating new environments is something humans do effortlessly, yet it is unexpectedly challenging for robots. While exploring a new area, we use visual cues to build up a mental map of the location, and of our position within it. Similar camera-based systems help robots ‘see’, but they fall short in certain contexts, such as when sensors lack a clear line of sight or when backgrounds are blank and featureless.
However, robots have other senses, unavailable to humans, that can be used to map out the environment. Taking advantage of the fact that WiFi is ubiquitous in indoor spaces, a team led by Le Zhang of A*STAR’s Institute for Infocomm Research (I2R) designed a novel non-visual system that helps robots orient themselves in new environments.
First, a mobile device is used to survey a location where WiFi access points have been installed beforehand. At specified reference spots, the device records the unique WiFi received signal strength (RSS) from each access point in a map database. Afterwards, when a robot or another device reports RSS readings from anywhere within the surveyed space, the database can be used to estimate its location.
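To make the survey step concrete, here is a minimal Python sketch of how such a fingerprint database might be assembled. The access point names, the grid of reference spots, and the scan_rss stub are all illustrative assumptions, not details of the team’s actual system.

```python
import numpy as np

ACCESS_POINTS = ["ap-01", "ap-02", "ap-03"]   # hypothetical AP identifiers

def scan_rss(spot):
    """Stand-in for a real WiFi scan: returns one RSS reading (in dBm)
    per access point. A real system would query the wireless interface."""
    rng = np.random.default_rng(hash(spot) % 2**32)
    return rng.uniform(-90.0, -30.0, size=len(ACCESS_POINTS))

# Map database: each reference spot (x, y in metres) maps to the RSS
# 'fingerprint' measured there during the offline survey.
fingerprint_db = {
    (x, y): scan_rss((x, y))
    for x in range(0, 10, 2)
    for y in range(0, 10, 2)
}
```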
What the team needed next was a technique that could accurately turn RSS information into location coordinates. The algorithms used for these calculations are known as fingerprinting-based algorithms, so named because the virtual ‘fingerprints’ collected across an area serve as a priori knowledge for robot localization or navigation.
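A classical baseline helps show what fingerprint matching involves. Continuing the sketch above, the function below estimates a position by averaging the coordinates of the k reference spots whose stored fingerprints best match a query reading; this is a standard k-nearest-neighbour scheme, shown only for illustration, not the team’s DeepFuzzy method.

```python
def locate(query_rss, db, k=3):
    """Classical k-nearest-neighbour fingerprinting: average the
    coordinates of the k reference spots whose stored RSS vectors
    are closest (in Euclidean distance) to the query reading."""
    coords = np.array(list(db.keys()), dtype=float)
    prints = np.array(list(db.values()))
    dists = np.linalg.norm(prints - query_rss, axis=1)
    nearest = np.argsort(dists)[:k]
    return coords[nearest].mean(axis=0)

# With a real scanner, nearby spots give similar RSS readings, so the
# averaged neighbours converge on the device's true position.
print("Estimated position:", locate(scan_rss((3, 5)), fingerprint_db))
```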
“Our model, called DeepFuzzy, is proposed to inherit the merits of decision trees and deep neural networks within an end-to-end trainable architecture,” Zhang said.
In DeepFuzzy, high-level features are first extracted from a sample by a deep network, then routed into decision trees that use fuzzy logic to make predictions. These ‘fuzzy trees’ are better suited to handling ambiguous information and produce more generalizable results. In performance comparisons, deep learning techniques generally outperformed conventional methods, but DeepFuzzy gave the most accurate location estimates of all.
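As a rough illustration of the fuzzy-tree idea (a sketch of the general technique, not the published DeepFuzzy code), the forward pass below routes a feature vector through a depth-2 tree in which every split is soft: a sigmoid gives the probability of branching right, and the prediction is a probability-weighted mixture of the leaf values. Because every step is differentiable, such a tree can be trained end-to-end together with the deep feature extractor.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def soft_tree_predict(features, node_w, node_b, leaf_values):
    """One 'fuzzy' (soft) decision tree of depth 2.

    features    : (d,) vector, e.g. the output of a deep network
    node_w/b    : routing parameters for the 3 internal nodes
                  (index 0 = root, 1 = left child, 2 = right child)
    leaf_values : (4, out_dim) predictions stored at the 4 leaves
    """
    p = sigmoid(node_w @ features + node_b)   # P(go right) at each node
    leaf_probs = np.array([
        (1 - p[0]) * (1 - p[1]),  # path: left, left
        (1 - p[0]) * p[1],        # path: left, right
        p[0] * (1 - p[2]),        # path: right, left
        p[0] * p[2],              # path: right, right
    ])
    return leaf_probs @ leaf_values           # soft mixture over leaves

# Toy usage: map an 8-dim feature vector to (x, y) coordinates.
rng = np.random.default_rng(0)
xy = soft_tree_predict(rng.normal(size=8),
                       rng.normal(size=(3, 8)), np.zeros(3),
                       rng.normal(size=(4, 2)))
print("Predicted coordinates:", xy)
```

Unlike a hard split, no information is discarded at an ambiguous node; both branches simply receive less weight, which is what makes the trees robust to noisy RSS readings.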
Zhang said DeepFuzzy can also be applied to tasks ranging from visual surveillance to image super-resolution. He and his team plan to apply what they learned from this project to developing better deep learning techniques with practical applications. “We are always interested in developing effective and efficient deep learning techniques for real-life problems,” he said.
The A*STAR researchers contributing to this study come from the Institute for Infocomm Research (I2R).