
In brief

For autonomous vehicles to navigate safely on roads, they must be able to identify and interpret traffic light signals.


Helping driverless cars see red (and green and amber)

27 Sep 2019

High dynamic range imaging combined with deep learning approaches could improve the ability of autonomous vehicles to recognize traffic lights.

Safety is of the utmost importance when it comes to allowing autonomous vehicles on public roads. Besides navigating amid other vehicles and pedestrians, self-driving cars must be able to stop at traffic junctions. To do so, they must first recognize traffic lights quickly and accurately.

To increase the sensitivity and speed of present-day traffic light recognition systems, scientists Jian-Gang Wang and Lu-Bing Zhou from A*STAR’s Institute for Infocomm Research (I2R) developed a first-of-its-kind system that relies on deep learning to analyze images from a high dynamic range (HDR) camera.

“Current traffic light recognition systems rely on traffic light pictures derived from bright images. This could generate false signals from mimickers such as traffic signs, pedestrian clothing and rear lights of neighboring vehicles present in the same image,” said Wang.

The team thus used a dual-channel HDR camera that captures images of traffic lights at both high and low exposure settings. Just as our eyes pick out dim lights more easily in the dark, the low-exposure (dark) channel serves as a filter for signals that could be falsely interpreted as traffic lights. The high- and low-exposure images are then recombined so that rich image textures are retained.
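To give a concrete picture of the idea, the dark-channel filtering step might look something like the minimal Python sketch below. The function name, threshold value and array layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def find_light_candidates(bright_img, dark_img, dark_threshold=200):
    """Illustrative sketch of dark-channel candidate filtering.

    bright_img, dark_img: HxWx3 uint8 frames captured at high and low
    exposure by a dual-channel HDR camera. Genuine emitters such as
    traffic lights stay bright in the under-exposed frame, while most
    reflective mimickers (signs, clothing) fade out.
    """
    # Luminance of the under-exposed frame; only strong emitters survive.
    dark_lum = dark_img.astype(np.float32).mean(axis=2)
    candidate_mask = dark_lum > dark_threshold

    # Keep the richer texture of the well-exposed frame inside candidate
    # regions, so a later classifier has colour and shape cues to work with.
    candidates = np.where(candidate_mask[..., None], bright_img, 0)
    return candidate_mask, candidates
```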

“This process increases the overall speed of traffic light detection by reducing the number of light candidates that need to be verified by a convolutional neural network,” explained Wang, referring to a method of deep learning useful for analyzing visual data.
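The verification stage could be a small image classifier along the lines of the following PyTorch sketch. The architecture, crop size and class labels are placeholders for illustration, not the network reported in the paper.

```python
import torch
import torch.nn as nn

class LightVerifier(nn.Module):
    """Toy CNN that labels a candidate crop as red, amber, green or
    not-a-light. Architecture and classes are illustrative assumptions."""

    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        # x: batch of 32x32 candidate crops, shape (N, 3, 32, 32)
        return self.classifier(self.features(x).flatten(1))

# Only crops that pass the dark-channel filter reach this costlier step.
scores = LightVerifier()(torch.randn(8, 3, 32, 32))  # -> (8, 4) logits
```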

Going one step further, the researchers included temporal trajectory analysis in their traffic light recognition system, which takes into account the relative position and size of the traffic light as a vehicle moves. Collectively, these innovations resulted in a traffic light recognition accuracy of more than 96 percent on average, across a range of traffic light types and signals.
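A temporal check of this kind can be illustrated with a simple consistency test over a candidate's track across frames; the box format and thresholds below are hypothetical and only sketch the general idea.

```python
def trajectory_is_consistent(track, min_frames=5, shrink_tol=0.8):
    """Sketch of a temporal-consistency test (thresholds are illustrative).

    track: list of (x, y, w, h) boxes for one candidate over consecutive
    frames. As the vehicle approaches, a real traffic light should stay in
    roughly the same part of the image and grow in apparent size, whereas
    spurious detections tend to jump around or vanish.
    """
    if len(track) < min_frames:
        return False
    widths = [w for _, _, w, _ in track]
    xs = [x for x, _, _, _ in track]
    # Apparent size should not shrink sharply between frames.
    size_ok = all(b >= shrink_tol * a for a, b in zip(widths, widths[1:]))
    # Horizontal position should drift smoothly, not jump across the image.
    pos_ok = all(abs(b - a) <= 2 * max(widths) for a, b in zip(xs, xs[1:]))
    return size_ok and pos_ok
```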

Compared to a state-of-the-art object detection approach that uses only bright images from a normal color camera, the researchers’ HDR method performed with greater precision, sensitivity and speed. They report that their traffic light recognition system has been integrated into an autonomous vehicle and was tested successfully on real roads.

Looking ahead, the team plans to extend the system to detecting vehicle signaling lights, and to test its feasibility under low-light conditions, such as at night.

The A*STAR-affiliated researchers contributing to this research are from the Institute for Infocomm Research (I2R).


References

Wang, J.-G. and Zhou, L.-B. Traffic light recognition with high dynamic range imaging and deep learning. IEEE Transactions on Intelligent Transportation Systems 20, 1341-1352 (2019).

About the Researcher

Jian-Gang Wang received his PhD degree in computer vision from Nanyang Technological University, Singapore, in 2001. He is presently a Senior Scientist at the Institute for Infocomm Research (I2R). A paper he published in 2008 received the Pattern Recognition Journal’s Honorable Mention in 2010. He was also awarded the 1995 Chinese Academy of Sciences Award, China, the 2016 A*STAR Borderless Award, Singapore, and the 2016 Ministry of Trade and Industry Borderless Award, Singapore.

This article was made for A*STAR Research by Wildtype Media Group