Bumps, flare-ups and redness: we typically identify skin conditions such as dermatitis by the symptoms that appear on the skin's surface. However, researchers say tomorrow’s dermatological technologies could deliver more accurate diagnoses by ‘listening’ for changes beneath the surface of the skin.
Raster-scanning optoacoustic mesoscopy, or RSOM, is a painless way of extracting more information from the skin than conventional light-based techniques. The method fires gentle laser pulses into the skin; when structures such as blood vessels absorb the light, they expand minutely and emit ultrasonic pressure waves. “The acoustic pressure wave is detected by a transducer that is placed just above the skin surface,” explained Malini Olivo, who heads the Laboratory of Bio-optical Imaging at A*STAR’s Institute of Bioengineering and Bioimaging (IBB). These sound waves travel through the tissue, are picked up as electrical signals and are then processed into images.
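To make that image-formation step concrete, here is a minimal, hypothetical sketch (not the team’s actual pipeline) of how a single recorded pressure trace can be turned into a depth profile using envelope detection, a standard operation in optoacoustic processing; every signal parameter below is invented for illustration.

```python
import numpy as np
from scipy.signal import hilbert

# Hypothetical A-scan: all values below are illustrative, not from the study.
fs = 100e6                      # transducer sampling rate, Hz
t = np.arange(0, 4e-6, 1 / fs)  # 4 microseconds of recorded signal

def wave_packet(t, t0, f0=30e6, tau=80e-9):
    """A short, decaying ultrasound burst arriving at time t0."""
    return np.exp(-((t - t0) / tau) ** 2) * np.sin(2 * np.pi * f0 * (t - t0))

# Two absorbers (e.g. blood vessels) at different depths emit pressure waves.
trace = wave_packet(t, 1.0e-6) + 0.5 * wave_packet(t, 2.5e-6)
trace += 0.02 * np.random.default_rng(0).normal(size=t.size)  # sensor noise

# Envelope detection: the magnitude of the analytic signal gives the
# depth-resolved absorption profile used to build one image column.
envelope = np.abs(hilbert(trace))

# Optoacoustic waves travel one way (absorber -> transducer),
# so depth is simply time-of-flight times the speed of sound.
c = 1540.0  # m/s, typical for soft tissue
print(f"strongest absorber at ~{t[np.argmax(envelope)] * c * 1e3:.2f} mm")
```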
Next, dermatologists can use machine learning (ML) tools to survey thousands of RSOM image data points and distinguish healthy from diseased skin. “ML models can extract important image features automatically and use these features for subsequent processing,” Olivo said, adding that this approach could benefit patients with difficult-to-diagnose conditions such as atopic dermatitis (AD).
Together with researchers from the National Skin Centre and A*STAR’s Bioinformatics Institute (BII), Olivo spearheaded a project that integrated ML analytical tools with RSOM to detect atopic dermatitis efficiently and accurately. To build their training dataset, the team recruited over 70 participants, including healthy individuals and those with AD. The researchers captured three-dimensional RSOM images of the participants’ skin, which an experienced dermatologist then graded for AD severity.
The dataset, paired with the dermatologist’s scores, was then used to train three ML models: a random forest (RF), a support vector machine (SVM) and a convolutional neural network (CNN). These models span a range of computational capabilities, with the CNN performing the most comprehensive analysis at the cost of additional computing time. Parameters such as blood volume, epidermal thickness and the ratio of larger to smaller blood vessels were also incorporated as training features to optimise accuracy.
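As an illustration of how such a model comparison might be set up, the hedged sketch below trains an RF and an SVM on a synthetic feature table containing the kinds of parameters the article mentions; the feature values, labels and settings are all invented, and a real CNN would instead ingest the 3-D image volumes directly, learning its own features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical feature table: one row per RSOM scan, columns echoing the
# article's parameters (all values synthetic, not the study's data).
n = 70
X = np.column_stack([
    rng.normal(0.30, 0.08, n),  # blood volume fraction
    rng.normal(120, 25, n),     # epidermal thickness, micrometres
    rng.normal(1.5, 0.4, n),    # ratio of larger to smaller vessels
])
y = rng.integers(0, 2, n)       # 0 = healthy, 1 = AD (dermatologist label)

models = {
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean 5-fold accuracy = {scores.mean():.2f}")
```

With random labels the reported accuracy will hover near chance; the point of the sketch is the workflow, not the numbers.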
The group found that all three models could tell AD apart from healthy skin with up to 97% accuracy. Additionally, the random forest model could classify images from AD patients as either mild or moderate-to-severe, with an accuracy of about 65%.
A larger, more balanced training dataset could help improve these metrics, said Hwee Kuan Lee, a Senior Principal Investigator at BII’s Imaging Informatics division. “The limited data set is our main challenge, especially the unbalanced data of different AD severities which can easily lead to overfitting,” Lee added, explaining that while 26 patients had moderate AD, only eight had severe AD, skewing the class balance.
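One standard mitigation for the imbalance Lee describes is to weight each class inversely to its frequency during training, so the rarer severe cases count for more. Below is a minimal sketch using scikit-learn’s built-in class weighting; the 26-versus-8 split mirrors the counts quoted above, but the features are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.utils.class_weight import compute_class_weight

rng = np.random.default_rng(1)

# Severity labels mirroring the imbalance quoted in the article:
# 26 moderate vs. 8 severe patients (features are synthetic).
y = np.array([0] * 26 + [1] * 8)   # 0 = moderate, 1 = severe
X = rng.normal(size=(y.size, 3))

# 'balanced' weights each class by n_samples / (n_classes * class_count),
# so the eight severe cases carry as much total weight as the 26 moderate.
weights = compute_class_weight("balanced", classes=np.array([0, 1]), y=y)
print(dict(zip((0, 1), np.round(weights, 3))))  # {0: 0.654, 1: 2.125}

clf = RandomForestClassifier(class_weight="balanced", random_state=0)
clf.fit(X, y)
```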
The team is now working to improve the sensitivity of their ML models and to extend the approach to other inflammatory skin diseases. “We will also explore other deep learning frameworks to realise automatically useful feature extraction and high accuracy classifications,” Olivo shared.
The A*STAR-affiliated researchers contributing to this research are from the Institute of Bioengineering and Bioimaging (IBB) and the Bioinformatics Institute (BII).