Computers do more than just stream your favourite shows or run the most advanced video games. Powered by machine learning and well-trained neural networks, computers can also intelligently analyse images. In industrial production lines, this means they can automatically detect flaws and assess product quality by sight alone, without the need for manual human inspection.
However, like all machine learning algorithms, defect detection methods need to be trained on a large amount of data to perform well and consistently. Insufficient training data, as is often the case in real-world industrial settings where defective products are relatively rare, can cripple an algorithm's ability to accurately detect defects.
These limitations may soon be overcome, as researchers from A*STAR have developed a new convolutional neural network, dubbed Class Activation Map Guided U-Net (CAM-UNet), to help train defect detection methods with limited amounts of data.
“There is a gap between the data condition of industrial applications and deep learning research,” said lead researcher Dongyun Lin, a research scientist at A*STAR’s Institute for Infocomm Research (I2R). “Our work targets this gap and aims to determine how to achieve highly accurate defect segmentation performance.”
To overcome the need for large datasets, CAM-UNet is trained to distinguish normal, defect-free images from a small number of annotated anomalous images. It then highlights the regions of the object that appear to have defects, creating what are called class activation maps (CAMs) that can guide subsequent analysis in identifying defective regions.
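To make the idea concrete, the sketch below shows the standard class activation map construction in PyTorch: a small classifier's feature maps are weighted by the classifier weights for the defect class, and the resulting heatmap is used as a soft spatial prior for a segmentation network. The layer sizes, names and the fusion step are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch only: a minimal CAM computation that could guide a
# U-Net-style segmentation head. Architecture details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyClassifier(nn.Module):
    """Small CNN classifier: normal vs. defective images."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.fc = nn.Linear(32, num_classes)  # acts on globally pooled features

    def forward(self, x):
        fmap = self.features(x)            # (B, 32, H, W)
        pooled = fmap.mean(dim=(2, 3))     # global average pooling
        logits = self.fc(pooled)
        return logits, fmap

def class_activation_map(model, images, target_class=1):
    """Weight feature maps by the classifier weights of the target class
    (the standard CAM construction) to highlight likely defect regions."""
    _, fmap = model(images)
    weights = model.fc.weight[target_class]          # (32,)
    cam = torch.einsum("c,bchw->bhw", weights, fmap) # weighted sum over channels
    cam = F.relu(cam)
    # Normalise each map to [0, 1] so it can serve as a soft spatial prior.
    cam = cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-8)
    return cam

if __name__ == "__main__":
    model = TinyClassifier()
    imgs = torch.randn(4, 3, 64, 64)            # dummy batch of images
    cams = class_activation_map(model, imgs)    # (4, 64, 64) defect heatmaps
    # One plausible way to "guide" a segmentation network: append the CAM
    # as an extra input channel alongside the image.
    guided_input = torch.cat([imgs, cams.unsqueeze(1)], dim=1)  # (4, 4, 64, 64)
    print(guided_input.shape)
```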
According to Lin, this approach is particularly useful for industrial settings, where defective data is limited.
“CAM-UNet can be trained under such real-world industrial scenarios and is capable of producing superior defect detection and identification performance,” Lin said, adding that CAM-UNet can potentially generate solutions for quality assessments using limited amounts of defective data.
The researchers emphasise that more work is still needed before CAM-UNet can be used in the real world. For instance, it still struggles to accurately assess products against a messy backdrop, as the extra detail makes classification more challenging.
“Also, the method should be tested on more real-world industrial data before being applied for practical uses,” Lin concluded.
The A*STAR-affiliated researchers contributing to this research are from the Institute for Infocomm Research (I2R).