A robot whose navigation system mirrors the neural scheme that humans and animals use to find their way around has been developed by A*STAR researchers [1].
Human navigation relies on two types of brain cells: place cells and grid cells. Place cells become active when we recognize familiar places, while grid cells provide an absolute reference system, so we can determine exactly where we are on a map.
Tracking relative movement, the way sailors once navigated, is essential for finding a way through unfamiliar areas, explains Miaolong Yuan from the A*STAR Institute for Infocomm Research team. “A sailor will use cues such as the stars or landmarks to determine where their ship is on a map, and then, as the ship moves, will update its location on the map by observing only speed and direction.”
To handle this type of relative navigation, the human brain uses grid cells, which provide a virtual reference frame for spatial awareness. Each time we pass one of the virtual grid points the brain has set up, the corresponding grid cell becomes active, telling us how we have moved relative to those coordinates. By combining place and grid cells, humans and animals can navigate their environment accurately.
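The two ideas above can be caricatured in a few lines of code. This is only an illustrative sketch, not the researchers' model: dead reckoning updates a position from speed and heading alone, and a toy "grid cell" fires whenever that position lands near a point of a regular lattice. The function names and the square lattice are my own simplifications.

```python
import math

def dead_reckon(start, moves):
    """Update a 2-D position from speed and heading alone, the sailor's
    form of relative navigation described above."""
    x, y = start
    for speed, heading in moves:  # heading in radians
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
    return x, y

def grid_cell_active(pos, spacing=1.0, tol=0.1):
    """A toy grid cell: fires whenever the position lies near a point
    of a square lattice with the given spacing. (Real grid cells fire
    on a hexagonal lattice; a square one keeps the sketch short.)"""
    x, y = pos
    dx = abs(x / spacing - round(x / spacing)) * spacing
    dy = abs(y / spacing - round(y / spacing)) * spacing
    return dx < tol and dy < tol
```

Moving three units along a straight heading, for example, ends at a lattice point and activates the corresponding grid cell, while a position halfway between lattice points leaves it silent.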
Yuan and the team have implemented the same neural scheme in robots, using computer programs that simulate the activity of place and grid cells in the brain. Crucial to the computational algorithm is the strength of the feedback between the grid cells and the place cells, and calibrating the visual signals is integral to the algorithm's map-building process.
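One way to picture the feedback between the two cell types is that place-cell-style recognition corrects the drift that pure path integration accumulates. The sketch below is a deliberately minimal stand-in, not the paper's entorhinal-hippocampal model: whenever the integrated position comes close to a known landmark, the visual fix overrides the integrated estimate.

```python
import math

def navigate(moves, landmarks, tol=0.3):
    """Path integration with place-cell-style correction: if the
    integrated position comes within `tol` of a known landmark, snap
    to the stored coordinates, discarding accumulated drift.
    (Illustrative only; landmark list and tolerance are assumptions.)"""
    x, y = 0.0, 0.0
    for speed, heading in moves:
        x += speed * math.cos(heading)  # integrate speed and direction
        y += speed * math.sin(heading)
        for lx, ly in landmarks:  # a 'place cell' fires near its landmark
            if math.hypot(x - lx, y - ly) < tol:
                x, y = lx, ly  # feedback: visual fix overrides integration
    return x, y
```

A slightly overestimated step that lands near a landmark is pulled back onto it; without landmarks the small error simply persists, which is why the calibration of visual signals matters.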
The algorithm was tested on a robot (see image) that explored a 35 × 35 meter indoor office environment. The robot was able to detect loops in its path through the office space and, by using visual cues to recognize areas it had visited repeatedly, built its own cognitive map of the office. The navigation system also helps the robot when it is lost in a new environment, says Yuan. “Cognitive maps can help the robot when it is lost, because they can provide global topological information of the navigating environment to help the robot localize itself.”
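Detecting a loop by recognizing a previously visited area can be sketched as comparing the latest visual signature against all earlier ones: a strong match to a view recorded much earlier in the trajectory means the robot has closed a loop. This toy version, with hypothetical feature vectors and thresholds of my choosing, stands in for the far richer visual processing in the actual system.

```python
def detect_loop(signatures, min_gap=10, threshold=0.9):
    """Toy loop-closure check over a list of visual feature vectors,
    one per step of the trajectory. Returns the index of the earlier
    matching view, or None if no loop is found."""
    def similarity(a, b):
        # cosine similarity between two equal-length feature vectors
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0
    latest = signatures[-1]
    for i, sig in enumerate(signatures[:-1]):
        # require the match to lie far enough back to count as a loop,
        # not just the previous frame
        if len(signatures) - 1 - i >= min_gap and similarity(latest, sig) > threshold:
            return i
    return None
```

In a real system the match would then trigger a correction of the map, pulling the two ends of the loop together.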
The A*STAR-affiliated researchers contributing to this research are from the Institute for Infocomm Research.
1. Yuan, M., Tian, B., Shim, V. A., Tang, H. & Li, H. An entorhinal-hippocampal model for simultaneous cognitive map building. Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, 586–592 (2015).