
In brief

A*STAR Computing and Information Science scholar Sidney Suen speaks about his aspirations to bring culturally nuanced ‘common sense’ to artificial intelligence frameworks, giving future systems a better sense of humanity’s diversity.


A byte of culture

21 Sep 2023

To develop artificial intelligences (AIs) with a better sense of humanity’s quirks, A*STAR Computing and Information Science scholar Sidney Suen is delving into the cross-cultural aspects of common sense and how they might be applied to AI frameworks.

For many of us today, AI touches many facets of our daily lives. Whether it’s asking a virtual home assistant to turn on the lights and play our favourite music, or using ‘smart compose’ text tools to help us complete emails and phone messages, our interactions with AI generally make everyday tasks more efficient.

Thanks to new advances, AIs can also help humans swiftly perform many sophisticated tasks. Businesses can use AI-powered sentiment analysis tools to systematically comb through online reviews, teasing out how customers feel about their brand. With AI, scientists can run brain-bending algorithms at unprecedented speeds, opening the door to new breakthroughs in medicine and climate research.

However, AI often falls surprisingly short in what we think of as common sense—the simple, often implicit facts and reasoning about the everyday world that we factor in when making decisions. For example, we might think it obvious that ‘people sleep when they are tired’ or that ‘lemons are sour’—but such ideas might never be explicitly spelled out in a dataset used to train an AI.
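
To make this concrete, AI researchers often encode commonsense facts as (subject, relation, object) triples, in the style of knowledge bases such as ConceptNet. The minimal Python sketch below is purely illustrative: the facts and the query helper are toy examples for this article, not part of any production system.

```python
# Toy illustration: commonsense facts encoded as (subject, relation, object)
# triples, in the style of knowledge bases such as ConceptNet.
# The facts and the query helper here are invented for illustration.

COMMONSENSE_TRIPLES = [
    ("lemon", "HasProperty", "sour"),
    ("person", "Desires", "sleep"),
    ("tired person", "CapableOf", "falling asleep"),
]

def query(subject: str, relation: str) -> list[str]:
    """Return every object linked to a subject by the given relation."""
    return [
        obj
        for subj, rel, obj in COMMONSENSE_TRIPLES
        if subj == subject and rel == relation
    ]

print(query("lemon", "HasProperty"))  # ['sour']
```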

Adding to the problem is the fact that common sense can also be culturally specific. The thumbs-up gesture carries positive connotations in many countries but offensive ones in others. Likewise, a smile with teeth might be seen as friendly by some people, but aggressive by others. Thus, for an AI to better understand what words and actions can signify, it needs a sense of the social and cultural norms tied to them.

The idea of developing AI systems with a sense of culture is one that captured the interest of Sidney Suen, an A*STAR Computing and Information Science (ACIS) scholar at Nanyang Technological University (NTU). In this interview with A*STAR Research, Suen shares how he entered the field of cross-cultural commonsense knowledge and reasoning in AI, as well as the potential opportunities and challenges involved.

Q: Tell us about your journey in AI research.

My exploration of AI began in my fourth year at the Singapore University of Technology and Design (SUTD), my alma mater. At the time, articles were claiming that AI could play video games, such as Dota 2, at a level that could beat professional players.

I wanted to do some form of scientific research but felt that I needed some time to prepare for it, so I decided to send in an application to A*STAR to work as a research engineer for a couple of years before considering graduate school. I got the chance to work at A*STAR’s Cognitive Human-like Empathetic and Explainable Machine-learning (CHEEM) programme, a research initiative focused on developing human-like AI.

It was at CHEEM that I had the opportunity to meet and collaborate with many passionate scientists and engineers, and where I discovered my research interest in commonsense knowledge and reasoning for AI, with an emphasis on the former. After my time there, I applied for the ACIS scholarship, which gave me the opportunity to pursue a PhD at NTU’s School of Computer Science and Engineering, along with a research attachment at A*STAR’s Institute of High Performance Computing (IHPC).

Q: What sparked your interest in cross-cultural knowledge in AI?

When I was examining the scientific literature on common sense, I noticed that one frequently highlighted issue was that the cultural aspects of commonsense knowledge in AI remained an open research problem. What particularly piqued my interest was the idea of commonsense knowledge being cultural in nature.

For example, the act of placing a packet of tissue paper on a table to reserve it—colloquially known as ‘chope-ing’ a seat—is a uniquely Singaporean gesture that locals would recognise at a glance. However, a visitor without background knowledge of local culture might not understand its significance.

Another example would be one person asking another if they ‘want to Grab or Gojek back’. The implied meaning here is a request to share a ride home, drawing on the local context that Grab and Gojek are ride-hailing services based in Southeast Asia.

Q: Why does exploring cultural commonsense matter in AI?

For one, when it comes to AI technologies meant to interface with people of different cultural backgrounds, it might be useful for those AIs to be cognizant of social and cultural norms. This would allow them to better serve people’s needs and avoid social faux pas when deployed.

Another consideration is that a current paradigm in AI is the use of large language models, which are pretrained on massive corpora of data from the internet. Challenges can arise in complex tasks that require commonsense knowledge because that knowledge is implicit; it largely exists outside any corpus of raw text.

There’s also the fact that cultural information isn’t limited to text. Visual information (like gestures) and aural information (like accents) can tell you a lot about the person you’re interacting with. Still, my research is primarily concerned with natural language, as that modality alone has many aspects of cultural nuance to explore, such as metaphors and colloquial expressions.

Q: What are some challenges in developing AI systems that understand culture?

Computationally modelling cross-cultural knowledge—which, as noted, is implicit and often anecdotal in nature—at scale is still a relatively open problem. The nature of culture itself is very hard to pin down: many dimensions, such as religion, ethnicity and the region of one’s upbringing (e.g., South versus North India), can intersect and interact with one another.

Furthermore, culture is fluid and can evolve over time, which means the cultural knowledge available to an AI system can become dated. This ties into a phenomenon known as ‘concept drift’, where the data underlying a task shifts over time, causing an AI model trained on an earlier version of that data to become inaccurate.
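
As a rough illustration of one symptom of concept drift in language data, the sketch below compares the word distribution of incoming text against that of a training corpus using KL divergence; the tokens and the alert threshold are invented for illustration, not drawn from a real monitoring pipeline.

```python
# Minimal sketch: flag possible concept drift when the word distribution
# of new text diverges sharply from the training corpus.
# Example tokens and the threshold are illustrative assumptions.
from collections import Counter
import math

def distribution(tokens: list[str]) -> dict[str, float]:
    counts = Counter(tokens)
    total = sum(counts.values())
    return {tok: n / total for tok, n in counts.items()}

def kl_divergence(p: dict, q: dict, eps: float = 1e-9) -> float:
    """KL(p || q), smoothing tokens that never appeared in q."""
    return sum(pv * math.log(pv / q.get(tok, eps)) for tok, pv in p.items())

train_tokens = "sick means ill sick means unwell".split()
new_tokens = "sick means cool sick means impressive".split()  # slang has shifted

drift = kl_divergence(distribution(new_tokens), distribution(train_tokens))
if drift > 1.0:  # illustrative threshold
    print(f"Possible concept drift (KL = {drift:.2f}); consider retraining.")
```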

Ethical challenges can also arise when AI systems apply cultural knowledge, such as the risk of that knowledge being used to stereotype people. Nonetheless, substantial research is being done to mitigate social bias in AI.

Q: Could you describe a current project you are working on?

Currently I’m conducting a literature review of sentiment analysis research in various languages used on the internet. The aim is to analyse the current state of research in each of those languages and identify any potential areas of innovation.

In this area, culture plays a part as well. For example, some studies note that Japanese users tend to be reluctant to give negative or strongly worded opinions in product reviews, possibly for cultural reasons. Consequently, the next step would be to investigate how such insights could be leveraged to enhance sentiment analysis.
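
One hypothetical way such an insight could feed into sentiment analysis is to recalibrate raw sentiment scores against per-culture baselines, so that a mildly negative score from a reticent reviewer population carries more weight. The sketch below assumes invented baseline statistics and is not a published method.

```python
# Hypothetical sketch: re-express a raw sentiment score as a z-score
# relative to its locale's baseline distribution. If reviewers in some
# locale rarely voice strong negativity, a mildly negative score there
# may signal stronger dissatisfaction.
# The baseline numbers below are invented for illustration.

LOCALE_BASELINES = {
    "ja": (0.30, 0.15),  # assumed: narrower, positive-leaning distribution
    "en": (0.00, 0.40),  # assumed: wider, neutral-centred distribution
}

def calibrated_sentiment(raw_score: float, locale: str) -> float:
    """Normalise a raw score against its locale's (mean, std) baseline."""
    mean, std = LOCALE_BASELINES[locale]
    return (raw_score - mean) / std

# The same mildly negative raw score reads as far more negative
# once normalised against the Japanese baseline.
print(calibrated_sentiment(-0.10, "ja"))  # ≈ -2.67
print(calibrated_sentiment(-0.10, "en"))  # -0.25
```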

Q: How might your work advance Singapore’s future AI capabilities?

I hope to develop a framework that can help AIs automatically acquire and distil cultural knowledge. The next stage would be to develop models that can apply that cultural knowledge to a downstream task, such as cyberbullying detection in social media, or sentiment analysis of forums that contain cultural expressions. An example is Hardware Zone, a Singapore-based internet portal for the discussion of tech-related topics.

Ultimately, the grand vision (so to speak) is to design an AI cultural-knowledge framework that could be applied across many different cultures.

Q: What advice would you offer young STEM talents interested in AI?

As a senior once told me, it’s good to socialise with scientists and various stakeholders in industry, as this can open up opportunities to discover new ideas or breakthroughs. On the technical side, it’s also important to scrutinise how various AI models are implemented, to better understand which features or modules are responsible for the models’ capabilities.


This article was made for A*STAR Research by Wildtype Media Group