What if you had accepted that job offer? Or if you had invested in that stock years ago? The ability to ponder an infinite spectrum of hypothetical scenarios—a process known as counterfactual reasoning—is a natural part of the human experience.
In the world of artificial intelligence (AI), scientists are working on instilling a similar sense of inquisitiveness into machines. Until now, they’ve focused mostly on training AI to understand cause-and-effect relationships, but it’s much tougher to nudge AI to wonder, “What if?”
“Without a ‘mental’ framework and logical structure, it’s hard for AI to truly grasp counterfactual reasoning,” said Yanzhu Liu, a Scientist at A*STAR’s Institute for Infocomm Research (I2R) and Centre for Frontier AI Research (CFAR).
Liu led a research team to build a new framework based on structural causal models (SCMs) that enables AI to predict how a system would have evolved differently if certain conditions or events had differed from what actually occurred. Termed ‘counterfactual dynamics forecasting’, this technique lets AI study real events and then, drawing from them, forecast the progression of hypothetical situations over time.
“This new formulation not only quantifies relevance and dissimilarity in counterfactual reasoning but also lays the groundwork for integrating such reasoning into deep neural networks,” said Liu, adding that the SCM framework combines middle-level abstraction with low-level quantitative computation.
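To make the idea concrete, SCM-based counterfactual reasoning is often illustrated with the generic abduction–action–prediction recipe. The minimal Python sketch below applies that recipe to a toy linear dynamical system; the functions, parameters and the system itself are illustrative assumptions, not the team’s actual framework.

```python
# Illustrative sketch (not the authors' implementation): the generic
# abduction-action-prediction recipe for counterfactuals, applied to a toy
# linear dynamical SCM, x[t+1] = a * x[t] + b * u[t] + noise[t].
import numpy as np

def simulate(x0, controls, a=0.9, b=0.5, noise=None):
    """Roll the toy linear dynamical system forward from x0."""
    x = [x0]
    for t, u in enumerate(controls):
        eps = noise[t] if noise is not None else 0.0
        x.append(a * x[-1] + b * u + eps)
    return np.array(x)

def counterfactual_forecast(observed, factual_controls, cf_controls, a=0.9, b=0.5):
    """Answer: how would the trajectory have evolved under different inputs?
    1. Abduction: infer the exogenous noise that explains the observed run.
    2. Action: swap in the counterfactual control sequence.
    3. Prediction: re-simulate with the inferred noise held fixed.
    """
    # Abduction: residual noise implied by each observed transition.
    noise = [observed[t + 1] - (a * observed[t] + b * u)
             for t, u in enumerate(factual_controls)]
    # Action + prediction: replay the same noise under the new controls.
    return simulate(observed[0], cf_controls, a, b, noise)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_noise = rng.normal(0.0, 0.05, size=10)
    factual_u = np.ones(10)   # what actually happened
    observed = simulate(1.0, factual_u, noise=true_noise)
    cf_u = np.zeros(10)       # "what if the input had been switched off?"
    print(counterfactual_forecast(observed, factual_u, cf_u))
```

Keeping the inferred noise fixed while changing only the intervention is what distinguishes a counterfactual forecast from an ordinary prediction about a new scenario.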
Liu and colleagues tested their SCM framework on two simulated dynamical systems and found it to be effective. Their results suggest that, in the future, rather than just being a tool for repetitive tasks, AI could take a more active role in supporting many different industries.
Consider health monitoring systems that collect data on vital signs and pre-emptively issue warnings about potential health concerns. Likewise, in the sports arena, AI-based tools may evaluate an athlete’s potential performance by simulating actions they have yet to take—all rooted in real-time data.
With these exciting possibilities in view, the researchers have outlined future steps to amplify their study's impact. While their current work relies on simulated data, they intend to gather more large-scale, practical data from real-world scenarios to put the framework through more stringent tests. They are also addressing bottlenecks in AI training protocols.
“Currently, manually pairing up instances for training is a time-consuming task,” added Liu. “To streamline this, we are exploring ways in which the AI system can autonomously pinpoint and pair pertinent data segments, a process which can significantly enhance the speed and depth at which the system learns and evolves.”
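As a rough illustration of what automatic pairing might look like, the hypothetical sketch below matches trajectory segments by the similarity of their starting states. The function and its matching criterion are assumptions made for illustration, not the procedure the team is developing.

```python
# Hypothetical sketch: automatically pairing trajectory segments by
# nearest-neighbour matching on their initial states. This is one plausible
# pairing criterion, not the researchers' planned method.
import numpy as np

def pair_segments(segments_a, segments_b):
    """Pair each segment in segments_a with the segment in segments_b
    whose initial state is closest in Euclidean distance."""
    starts_b = np.array([seg[0] for seg in segments_b])
    pairs = []
    for i, seg in enumerate(segments_a):
        dists = np.linalg.norm(starts_b - seg[0], axis=1)
        pairs.append((i, int(np.argmin(dists))))
    return pairs

# Example: pair ten random 3-dimensional segments with ten others.
rng = np.random.default_rng(0)
set_a = [rng.normal(size=(5, 3)) for _ in range(10)]
set_b = [rng.normal(size=(5, 3)) for _ in range(10)]
print(pair_segments(set_a, set_b))
```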
The A*STAR-affiliated researchers contributing to this research are from the Institute for Infocomm Research (I2R) and Centre for Frontier AI Research (CFAR).