
In brief

The DehazeGAN model is a generative adversarial network that can recover clean images from hazy ones.


Seeing clearly through the haze

2 Nov 2020

A promising new computational model can reconstruct clean images from hazy ones.

If you’ve ever spent a day out in the haze, you might have noticed how difficult it is to see anything. But human eyes aren’t the only ones that struggle in such conditions—digital vision sensors and computer vision algorithms take a hit too, with potentially serious implications for systems that rely on clear vision, such as video surveillance cameras or autonomous vehicles.

Because the smoke or dust particles that make up haze create a kind of non-additive noise, hazy images can’t be resolved with simple contrast enhancement methods alone. Instead, haze removal relies on the accurate estimation of two factors: the global atmospheric light and the transmission map, which describes the fraction of light that reaches the camera without being scattered.
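For readers unfamiliar with the physics, the widely used atmospheric scattering model expresses a hazy image as a blend of the clean scene and the atmospheric light, weighted by the transmission map. The NumPy sketch below is a minimal illustration of that model and its inversion, not the researchers’ code; the function names and the transmission floor `t_min` are our own assumptions.

```python
import numpy as np

def apply_haze(clean, transmission, atmospheric_light):
    """Synthesize haze with the atmospheric scattering model:
    I(x) = J(x) * t(x) + A * (1 - t(x)),
    where J is the clean scene, t the transmission map, and A the global atmospheric light."""
    t = transmission[..., None]  # broadcast (H, W) map over color channels
    return clean * t + atmospheric_light * (1.0 - t)

def dehaze(hazy, transmission, atmospheric_light, t_min=0.1):
    """Invert the model to recover the clean scene, given estimates of t(x) and A,
    the two quantities a dehazing method must infer."""
    t = np.clip(transmission, t_min, 1.0)[..., None]  # avoid dividing by near-zero transmission
    return (hazy - atmospheric_light * (1.0 - t)) / t

# Toy example: add haze to a random "clean" image, then recover it exactly
rng = np.random.default_rng(0)
J = rng.random((4, 4, 3))       # clean image
t = np.full((4, 4), 0.6)        # uniform transmission map
A = 0.9                          # bright atmospheric light
I = apply_haze(J, t, A)
assert np.allclose(dehaze(I, t, A), J)
```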

Most existing methods estimate these two parameters separately, which reduces efficiency and accuracy. In contrast, researchers led by Hongyuan Zhu, a Research Scientist at the A*STAR Institute for Infocomm Research (I2R) on secondment to the Institute of High Performance Computing (IHPC), have created a new model that applies a generative adversarial network (GAN) to single-image dehazing for the first time.

Thanks to their two-network architecture, GANs have delivered high-quality results in tasks such as image generation and object detection. The resulting model, aptly named DehazeGAN, has been shown to reliably recover clean images from hazy ones and outperform state-of-the-art methods.

“The noise created by haze is material and distance-dependent, according to the atmospheric scattering model. DehazeGAN is the first end-to-end method that solves the image 'dehazing' problem by embracing this model,” Zhu said.

DehazeGAN’s success lies in its two components: a novel compositional generator, which enables DehazeGAN to directly learn the physical parameters from data, and a novel deeply supervised discriminator, which ensures clean image output.
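To make the idea of a compositional generator concrete, the PyTorch sketch below shows one way such a pipeline could be wired: a generator predicts the transmission map and atmospheric light from the hazy input, composes the clean image through the physical model, and a discriminator scores the result. This is a deliberately simplified conceptual illustration under our own assumptions (layer sizes, clamping threshold, module names), not the published DehazeGAN architecture.

```python
import torch
import torch.nn as nn

class PhysicsGenerator(nn.Module):
    """Toy compositional generator: predicts t(x) and a global A, then recovers
    the clean image by inverting I = J*t + A*(1-t). A tiny stand-in for illustration."""
    def __init__(self):
        super().__init__()
        self.t_net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),   # transmission in (0, 1)
        )
        self.a_net = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(3, 3, 1), nn.Sigmoid(),  # global atmospheric light
        )

    def forward(self, hazy):
        t = self.t_net(hazy).clamp(min=0.1)   # avoid dividing by near-zero transmission
        a = self.a_net(hazy)                  # shape (N, 3, 1, 1), broadcasts over the image
        clean = (hazy - a * (1 - t)) / t      # invert the scattering model
        return clean, t, a

# A discriminator scores recovered images against real haze-free ones; its
# adversarial loss pushes the generator toward photorealistic output.
discriminator = nn.Sequential(
    nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(16, 1, 4, stride=2, padding=1),
)

# Usage: clean_est, t_map, airlight = PhysicsGenerator()(torch.rand(1, 3, 64, 64))
```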

“Our method achieves superior performance in all metrics thanks to physical modeling and adversarial learning,” he shared. “Moreover, it models the recovery process as a highly efficient, fully convolutional neural network with real-time performance.”

According to Zhu, DehazeGAN can be used to enhance the quality of vision sensors in autonomous vehicles or mobile phones, as well as the robustness and accuracy of existing computer vision systems under adverse weather conditions.

To test DehazeGAN’s performance, the researchers created the HazeCOCO dataset of synthesized haze images, which they have shared for use in other single-image 'dehazing' efforts.

“HazeCOCO is currently the largest haze dataset with various diverse visual patterns for learning discriminative 'dehazing' features, which can benefit further research in this field,” said Zhu.

The A*STAR-affiliated researchers contributing to this research are from the Institute for Infocomm Research (I2R) and Institute of High Performance Computing (IHPC).

Want to stay up to date with breakthroughs from A*STAR? Follow us on Twitter and LinkedIn!

References

Zhu, H. et al. Single-Image Dehazing via Compositional Adversarial Network. IEEE Transactions on Cybernetics (2019).

About the Researcher

Hongyuan Zhu received his PhD from Nanyang Technological University, Singapore, in 2015 for his research into multi-modal deep learning. He is currently a Research Scientist at the A*STAR Institute for Infocomm Research (I2R), on secondment to the A*STAR Institute of High Performance Computing (IHPC). He leads the Multi-modal Perceptual & Reasoning Team, which focuses on multi-modal scene perception, learning and reasoning. Zhu’s research has been published in many conference proceedings, including the Conference on Computer Vision and Pattern Recognition (CVPR), and he served as a guest editor of IET Image Processing in 2019.

This article was made for A*STAR Research by Wildtype Media Group