Oncologists may soon add artificial intelligence (AI) and computational platforms to their toolkits, good news for the fight against some of the most difficult-to-treat tumour types, such as colorectal tumours. Colorectal tumours can be removed in one of two ways: endoscopic resection, which includes endoscopic mucosal resection (EMR) and endoscopic submucosal dissection (ESD), or conventional surgical resection. The choice between the two hinges on the depth of tumour invasion.
“Early colorectal cancers with an invasion depth of more than one millimetre into the submucosal layer have a higher risk of lymph node metastasis; these are contraindicated for ESD and EMR and require the more intensive surgery as a safety precaution,” explained Jiahao Wang from A*STAR’s Institute of Bioengineering and Bioimaging (IBB).
However, gauging the invasion depth of tumours in real time, and thus making the call on which procedure to choose, can be a challenge for oncologists, who don’t always have access to specialised diagnostic endoscopic equipment. Moreover, while early iterations of computer-aided diagnosis (CAD) tools have been developed, they remain too inaccurate to be relied upon.
Wang and IBB Senior Principal Investigator Hanry Yu set out to develop an effective alternative to help surgeons in the operating theatre. The team built an AI-driven CAD platform that analyses images captured using white light colonoscopy (WLC), a powerful technique for classifying and visualising colon disease.
The team assembled a training dataset of over 7,000 WLC images drawn from 657 cancerous lesions, each annotated by trained cancer histologists. They then fed this data into a deep convolutional neural network built on the GoogLeNet architecture, incorporating contextual information inspired by the way expert clinicians make diagnoses. GoogLeNet was chosen so that the platform can run efficiently in hospital settings with limited computational resources.
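The study does not publish its code, but as a rough illustration, fine-tuning a GoogLeNet backbone for a two-class problem of this kind might look like the minimal PyTorch sketch below. The folder layout, class names and hyperparameters are assumptions for illustration, not details from the study, and the sketch omits the contextual-information component the team describes.

```python
# Illustrative sketch only: fine-tuning a GoogLeNet backbone to classify
# lesions as superficial vs. deeply invasive. Dataset paths, class names
# and hyperparameters are hypothetical, not taken from the study.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard ImageNet-style preprocessing applied to endoscopy still images.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical layout: wlc_images/superficial/*.png, wlc_images/deep/*.png
train_set = datasets.ImageFolder("wlc_images", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# GoogLeNet backbone pretrained on ImageNet; the final fully connected
# layer is swapped out for the two-class lesion problem.
model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for images, labels in loader:  # one epoch shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

A compact backbone like this is consistent with the team's stated goal of running on modest hospital hardware, since GoogLeNet requires far fewer parameters and operations than larger classification networks.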
Next, they validated the model on a testing dataset of over 1,600 WLC images, about 10 percent of which depicted early-stage, deeply invasive colon cancer. The researchers reported that the new model was more accurate, sensitive and specific than existing technologies: besides boasting an accuracy of over 91 percent, it differentiated between superficial and deeply invasive colorectal tumours as well as experienced endoscopists did.
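For readers unfamiliar with these metrics, the short sketch below shows how accuracy, sensitivity and specificity are computed from a binary confusion matrix. The labels and predictions are made up for illustration; they are not data from the study.

```python
# Illustrative only: accuracy, sensitivity and specificity for a binary
# classifier (1 = deeply invasive, 0 = superficial). Toy data, not study data.
y_true = [1, 0, 0, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 0, 0, 1, 1, 0]

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # true negatives
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives

accuracy = (tp + tn) / len(y_true)   # overall proportion correct
sensitivity = tp / (tp + fn)         # deep cancers correctly flagged
specificity = tn / (tn + fp)         # superficial lesions correctly cleared
print(accuracy, sensitivity, specificity)
```

Sensitivity is arguably the critical figure clinically: a deeply invasive tumour misread as superficial could be sent for endoscopic resection when the more intensive surgery is indicated.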
Wang is hopeful that their innovation could one day help more cancer patients. “Even in situations where experienced doctors are unavailable, the algorithm can flag cancerous regions,” he said.
Nevertheless, some barriers still stand in the way of widespread clinical adoption. The current iteration was trained using still images, but it is uncertain how the algorithm will perform when presented with video feeds. Future studies in collaboration with other industry experts will aim to unlock the full potential of the platform, say the researchers.
The A*STAR-affiliated researchers contributing to this research are from the Institute of Bioengineering and Bioimaging (IBB).