In brief

Counterfactual dynamics forecasting allows artificial intelligence to predict hypothetical outcomes from real-time data, a feature which could revolutionise fields like video technology and healthcare.


Making machines wonder

2 Nov 2023

Researchers teach artificial intelligence systems how to process information by mimicking human introspection.

What if you had accepted that job offer? Or invested in that stock years ago? The ability to ponder the infinite spectrum of hypothetical scenarios—a process known as counterfactual reasoning—is a natural part of the human experience.

In the world of artificial intelligence (AI), scientists are working on instilling a similar sense of inquisitiveness into machines. Until now, they’ve focused mostly on training AI to understand cause-and-effect relationships, but it’s much tougher to nudge AI to wonder, “What if?”

“Without a ‘mental’ framework and logical structure, it’s hard for AI to truly grasp counterfactual reasoning,” said Yanzhu Liu, a Scientist at A*STAR’s Institute for Infocomm Research (I2R) and Centre for Frontier AI Research (CFAR).

Liu led a research team to build a new framework based on structural causal models (SCMs), which enables AI to predict how a system would have evolved if certain conditions or events had differed from what actually occurred. Termed ‘counterfactual dynamics forecasting’, this technique lets AI study real events and then, drawing from them, forecast the progression of hypothetical situations over time.

“This new formulation not only quantifies relevance and dissimilarity in counterfactual reasoning but also lays the groundwork for integrating such reasoning into deep neural networks,” said Liu, adding that the SCM framework combines mid-level abstraction with low-level quantitative computation.
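The general abduction–action–prediction recipe behind SCM-based counterfactual reasoning can be illustrated with a toy dynamical system. This is a minimal sketch, not the authors’ implementation: the linear dynamics, the coefficient `a` and all variable names below are illustrative assumptions.

```python
# Toy counterfactual forecast in a structural causal model (SCM):
# abduction (recover noise), action (intervene), prediction (replay).
# Illustrative example only; not the method from the paper.

def simulate(x0, noises, a=0.9):
    """Assumed dynamics: x[t+1] = a * x[t] + u[t]."""
    xs = [x0]
    for u in noises:
        xs.append(a * xs[-1] + u)
    return xs

a = 0.9

# Factual world: an observed trajectory driven by unknown noise terms.
factual_noises = [0.5, -0.2, 0.1]
factual = simulate(x0=1.0, noises=factual_noises)

# Abduction: infer the exogenous noise consistent with the observation.
inferred = [factual[t + 1] - a * factual[t] for t in range(len(factual) - 1)]

# Action: intervene on the initial condition ("what if x0 had been 2?").
# Prediction: replay the *same* inferred noise under the intervention.
counterfactual = simulate(x0=2.0, noises=inferred)

print(factual)         # the trajectory that actually occurred
print(counterfactual)  # how it would have evolved under the intervention
```

Keeping the inferred noise fixed while changing the intervention is what makes the forecast counterfactual rather than merely a fresh simulation: both worlds share the same background conditions and differ only in the hypothesised event.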

Liu and colleagues tested their SCM framework on two dynamical AI systems and found it to be effective. Their results suggest that in the future, rather than just being a tool for repetitive tasks, AI could take a more active role in supporting many different industries.

Consider health monitoring systems that collect data on vital signs and pre-emptively issue warnings about potential health concerns. Likewise, in the sports arena, AI-based tools may evaluate an athlete’s potential performance by simulating actions they have yet to take—all rooted in real-time data.

With these exciting possibilities in view, the researchers have outlined future steps to amplify their study's impact. While their current work relies on simulated data, they intend to gather more large-scale, practical data from real-world scenarios to put the framework through more stringent tests. They are also addressing bottlenecks in AI training protocols.

“Currently, manually pairing up instances for training is a time-consuming task,” added Liu. “To streamline this, we are exploring ways in which the AI system can autonomously pinpoint and pair pertinent data segments, a process which can significantly enhance the speed and depth at which the system learns and evolves.”

The A*STAR-affiliated researchers contributing to this research are from the Institute for Infocomm Research (I2R) and Centre for Frontier AI Research (CFAR).



Liu, Y., Sun, Y. and Lim, J.-H. Counterfactual dynamics forecasting—a new setting of quantitative reasoning. Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence 37, 1764–1771 (2023).

About the Researchers

Yanzhu Liu is a Scientist at A*STAR’s Institute for Infocomm Research (I2R). She obtained a PhD degree in computer science and engineering from Nanyang Technological University, Singapore. Her research interests include visual reasoning and neuro-symbolic AI.
Ying Sun received her BEng from Tsinghua University, her MPhil from Hong Kong University of Science and Technology, and her PhD in electrical and computer engineering from Carnegie Mellon University. She is currently a Principal Scientist at the Institute for Infocomm Research (I2R) and Centre for Frontier AI Research (CFAR). Her research interests include computer vision and machine learning, especially video analysis, visual representation learning, and visual reasoning.

Joo-Hwee Lim is currently a Senior Principal Scientist III and the Head of the Visual Intelligence Unit at A*STAR’s Institute for Infocomm Research (I2R), and an Adjunct Professor at the School of Computer Engineering, Nanyang Technological University, Singapore. He received his BSc and MSc (research) degrees in Computer Science from the National University of Singapore and his PhD in Computer Science & Engineering from the University of New South Wales, Australia. He joined I2R in October 1990. His research experience spans connectionist expert systems, neural-fuzzy systems, handwriting recognition, multi-agent systems, content-based image retrieval, scene and object recognition, and medical image analysis.

This article was made for A*STAR Research by Wildtype Media Group