In brief

SPASCA, an interactive virtual companion with a customisable video avatar and a dialogue model trained on reminiscence therapy, delivers personalised engagement and adaptive interaction for persons living with dementia.


A tireless friend in dementia care

16 Oct 2025

An AI-driven conversational assistant aims to bring new forms of companionship and emotional support to people living with cognitive decline.

Conversations with persons living with dementia (PLWDs) can be challenging for caregivers. When the same questions come up again and again, patient responses may come easily at first; over time, however, the repetition can become draining in a way that affects not only a caregiver’s wellbeing but also the depth of care they can provide, leaving PLWDs feeling more isolated.

What if a digital companion could step in to help provide round-the-clock conversation, comfort and support for PLWDs? That idea inspired the Social Presence and Support with Conversational Agent (SPASCA): an interactive virtual avatar jointly developed by the A*STAR Institute for Infocomm Research (A*STAR I2R), Singapore Management University (SMU) and Dementia Singapore.

At its core, SPASCA combines two artificial intelligence (AI) components: a dialogue model that learns how to personalise its conversations over time, and a digital ‘talking head’ avatar that can display empathy through facial expressions and gestures.
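
SPASCA’s source code is not public; purely as a hedged illustration of how such a two-part pipeline might be wired together, the Python sketch below uses hypothetical DialogueModel, EmotionRecogniser and AvatarRenderer objects as stand-ins for the actual components.

```python
from dataclasses import dataclass

@dataclass
class AvatarFramePlan:
    """What the renderer turns into video: what to say, and how to look while saying it."""
    text: str        # reply to be spoken via text-to-speech
    expression: str  # e.g. "warm_smile", "attentive"
    gesture: str     # e.g. "nod", "tilt_head"

class CompanionTurn:
    """One conversational turn: user speech in, rendered avatar response out.

    DialogueModel, EmotionRecogniser and AvatarRenderer are hypothetical
    stand-ins; SPASCA's actual interfaces are not public.
    """
    def __init__(self, dialogue_model, emotion_recogniser, renderer):
        self.dialogue_model = dialogue_model
        self.emotion_recogniser = emotion_recogniser
        self.renderer = renderer

    def respond(self, user_id: str, utterance: str):
        # A personalised reply, conditioned on this user's history.
        reply = self.dialogue_model.generate(user_id, utterance)
        # The detected emotional state steers the avatar's non-verbal behaviour.
        emotion = self.emotion_recogniser.classify(utterance)
        plan = AvatarFramePlan(
            text=reply,
            expression="warm_smile" if emotion == "happy" else "attentive",
            gesture="nod",
        )
        return self.renderer.render(plan)  # synthesised talking-head video
```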

“Traditional AI responds the same way to everyone, but our system adapts like a skilled human caregiver—being more passionate for someone who is energetic, or calmer for someone who gets easily overwhelmed,” said Ali Köksal and Qianli Xu, respectively a Scientist and a Principal Scientist at A*STAR I2R.

SPASCA’s dialogue model—developed by SMU’s Jing Jiang and Kotaro Hara—was trained on recorded conversations between real-world physiotherapists and PLWDs during reminiscence therapy sessions.

“Being a small specialised model, SPASCA builds a memory bank of interactions with individual users to develop a conversational style suited to each one,” said Xu. “This lifelong learning capability means that the system—like a personal assistant—gradually attunes to a person’s unique needs, communication style and cultural background.”
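
The internals of SPASCA’s memory bank are not described in the article. As a minimal sketch of the general idea, assuming a simple per-user log of past exchanges that conditions the next reply, one might write:

```python
from collections import defaultdict, deque

class MemoryBank:
    """Bounded per-user history of exchanges (hypothetical sketch).

    A real lifelong-learning system would also distil stable preferences,
    such as favourite topics, pace and language, rather than raw turns alone.
    """
    def __init__(self, max_turns: int = 50):
        self.history = defaultdict(lambda: deque(maxlen=max_turns))

    def record(self, user_id: str, utterance: str, reply: str) -> None:
        self.history[user_id].append((utterance, reply))

    def context_for(self, user_id: str, last_n: int = 5) -> str:
        # Recent turns become conditioning context for the next reply,
        # so the model's style gradually attunes to this user.
        turns = list(self.history[user_id])[-last_n:]
        return "\n".join(f"User: {u}\nSPASCA: {r}" for u, r in turns)

# Usage: retrieve context before generating the next personalised reply.
bank = MemoryBank()
bank.record("user_01", "Tell me about the old kampong days.", "Gladly! ...")
prompt = bank.context_for("user_01") + "\nUser: What did we talk about last time?"
```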

Coupled with the dialogue model, SPASCA’s video generation arm includes an emotion recognition module that lets it synthesise a human avatar whose poses and facial expressions not only adapt to the ongoing conversation but can also be customised to each person’s unique needs, such as by taking on the likeness of a familiar caregiver.
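
Again as an assumption rather than a description of SPASCA’s actual control scheme, a simple way to realise emotion-adaptive behaviour is a lookup from a detected emotional state to the renderer’s expression and pose parameters; every name below is hypothetical.

```python
# Hypothetical mapping from a detected user emotion to avatar behaviour;
# SPASCA's actual control scheme is not described in the article.
EXPRESSION_MAP = {
    "happy":    {"expression": "broad_smile",  "head": "lively_nod",  "blink_rate": 0.40},
    "sad":      {"expression": "soft_concern", "head": "gentle_tilt", "blink_rate": 0.25},
    "confused": {"expression": "calm_neutral", "head": "slow_nod",    "blink_rate": 0.30},
}
DEFAULT = {"expression": "attentive", "head": "still", "blink_rate": 0.30}

def avatar_params(emotion: str, likeness: str = "familiar_caregiver") -> dict:
    """Pick pose/expression parameters while keeping the customised likeness fixed."""
    params = dict(EXPRESSION_MAP.get(emotion, DEFAULT))
    params["likeness"] = likeness  # e.g. a familiar caregiver's appearance
    return params
```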

“Achieving natural mouth movement, head gestures, blinking and believable facial expressions needed extensive engineering work, especially as the system needs to run smoothly on everyday computers,” said Köksal.

The team has developed three prototypes with varying voices and interaction styles, and is currently evaluating SPASCA’s performance and efficacy with PLWDs. A surprising discovery was that some details emphasised during SPASCA’s development, such as accurate lip synchronisation, mattered less to target users than expected.

“This experience helped us understand that supporting PLWDs requires focusing on the right details—not necessarily the most technically impressive ones,” said Köksal.

The team aims for SPASCA to complement human care, rather than replace it. “Ideally, SPASCA could act as a skilled nursing assistant that never tires, handling routine tasks so human caregivers can focus on what they do best,” Xu added.

The A*STAR-affiliated researchers contributing to this research are from the A*STAR Institute for Infocomm Research (A*STAR I2R).

References

Köksal, A., Gu, J., Hara, K., Jiang, J., Lim, J.-H., et al. SPASCA: Social presence and support with conversational agent for persons living with dementia. Proceedings of the Thirty-Ninth AAAI Conference on Artificial Intelligence, 29649–29651 (2025).

About the Researchers

Ali Köksal is a Scientist at the A*STAR Institute for Infocomm Research (A*STAR I2R), where he has been diving into machine learning and visual AI since 2022. He holds a PhD in computer science and engineering from Nanyang Technological University, Singapore, and earned both his Master’s and Bachelor’s degrees in computer engineering from the İzmir Institute of Technology, Türkiye. Köksal’s research explores generative models, style transfer, video generation and diffusion models, with a special interest in talking face synthesis. He is passionate about pushing the boundaries of visual understanding and making AI research more open, creative and fun.
Qianli Xu leads the Collaborative Visual Intelligence Group as a Principal Scientist at the A*STAR Institute for Infocomm Research (A*STAR I2R), where he specialises in human-AI collaboration, visual analytics and generation, and health informatics. His research aims to advance human-AI synergy for practical, widespread use in key sectors such as healthcare, education and manufacturing.

This article was made for A*STAR Research by Wildtype Media Group