Reading between the lines

December 20, 2018

Smart software takes the emotional temperature of comments made online, and even spots sarcasm.

Understanding the real meaning of comments left online is difficult for machines, but A*STAR is building clever programs that can spot important subtleties, including intensity and sarcasm.

© Getty Images/smartboy10

Are the trains running on time today or might the bus be quicker? Is that new restaurant near work any good for lunch? Is the latest blockbuster movie worth the price of a ticket?

Sarcasm is very hard for computers to detect, as sarcastic comments use many of the same words and language structures as positive comments. A*STAR's Crystalace program helps computers spot it. Above is a map produced by the Crystalace team, showing the most sarcastic countries based on word profiles from tweets.

© A*STAR Institute of High Performance Computing

Better ask the internet – social media platforms such as Twitter and Facebook are awash with opinions on almost any topic. And online, people tend to be more frank and forthcoming than in person. For many, tapping the wisdom of the crowd to seek advice has become a big part of daily life.

Opinion and sentiment expressed online can also be invaluable for organizations — from the smartphone manufacturer keen to know how its new product is being received, to a government department seeking feedback on a policy. Such is the torrent of data now flooding social media platforms, that for most organizations the biggest challenge is trying to extract useful insights from it, rather than being simply swamped.

The tools developed by Yang Yinping and her colleagues at the A*STAR Institute of High Performance Computing (IHPC) deal with this deluge. Over the past five years, Yang and her multidisciplinary team have developed algorithms that can extract and categorize, in fine-grained detail, the emotional content of posts such as tweets.

That emotion-level analytic capability is critical, Yang says, pointing out that merely gleaning negativity is not useful. Only once you can discern whether the negativity is anger or anxiety, for example, can you effectively respond. “If someone in front of you is angry, you would use a different communication strategy than if they were worried or anxious,” Yang says.

It’s the same with online communication. “In 2017, we released an accurate and scalable sentiment analysis tool that has already been licensed and used by a number of companies,” Yang says. She adds that the tool, SentiMo, will soon be followed by further products already in the pipeline.

Sentiment analysis

It was around 2013 that Yang and her team’s interest in sentiment analysis was first sparked. Online platforms such as Twitter and Facebook were really taking off and there was a phenomenally large volume of social media data available, she recalls. Public-facing organizations were having trouble handling this new communication stream.

To address this, a new team with a broad collective skillset was formed within the IHPC. “The team composition is a very interesting dynamic of multidisciplinary researchers including data analysts, statisticians, psychologists, and behavior researchers,” Yang says.

The SentiMo team showcasing an early system demo at CommunicAsia, with organizers Brenda Lork and Billy Teo. Left to right: Landy Lan, Nie Maowen, Wang Zhaoxia, Chong Chee Seng, Brenda Lork, Billy Teo, Praveena Satkunarajah and Yang Yinping.

© A*STAR Institute of High Performance Computing

The team initially tried to analyze tweets about one of Singapore's transport services with existing software, such as open-access Stanford-produced natural language processing tools. But the results were not satisfactory, Yang says. Negative tweets tended to be wrongly classified as neutral, for instance. The team took on the challenge and developed a sentiment analysis tool of its own, called SentiMo.

To create the program, the team’s efforts ranged from constructing their own sentiment lexicons — specialized dictionaries that label words as positive or negative — to devising ‘sentence decomposers’ that break text into easier-to-handle blocks, and finding linguistic rules to handle special words. “We also included things like emoticons and the latest social media slang that you don’t find in the Oxford dictionary,” says Yang. “Over an 18-month development period, we coded and tested these rules, until adding more rules didn’t significantly increase the performance.”

The program, released in full in March 2017, can handle formal and informal English language text, and can accurately tag a text message with fine-grained sentiment categories (such as positive, negative, neutral, and mixed).
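
A lexicon-and-rules approach of this kind can be illustrated with a toy sketch. This is not SentiMo's actual code; the mini-lexicon, the negation rule, and the category logic below are all illustrative assumptions:

```python
# Toy lexicon-based sentiment tagger with the fine-grained categories
# described in the article: positive, negative, neutral, and mixed.
# The lexicon entries and the single negation rule are illustrative only.

LEXICON = {"love": 1, "great": 1, "thanks": 1, "delayed": -1, "awful": -1}
NEGATORS = {"not", "no", "never"}

def tag_sentiment(text):
    """Return 'positive', 'negative', 'neutral', or 'mixed'."""
    words = text.lower().split()
    pos = neg = 0
    for i, w in enumerate(words):
        score = LEXICON.get(w.strip(".,!?"), 0)
        # Simple negation rule: flip polarity after a negator word.
        if i > 0 and words[i - 1] in NEGATORS:
            score = -score
        if score > 0:
            pos += 1
        elif score < 0:
            neg += 1
    if pos and neg:
        return "mixed"
    if pos:
        return "positive"
    if neg:
        return "negative"
    return "neutral"

print(tag_sentiment("The train was not delayed, great service"))  # positive
```

A real system layers many more rules (emoticons, slang, sentence decomposition) on top of a far larger lexicon, but the control flow is of this general shape.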

Awesome, thanks Twitter!

For tweets written in relatively straightforward language, SentiMo worked well. However, some social media expressions do not mean what they say.

Tweets are among the text inputs that the A*STAR Institute of High Performance Computing is analyzing for sentiment.

© SeongJoon Cho/Bloomberg via Getty Images

Take one tweet sent to a major US airline, says Yang. “Hey just wanna send a big thanks to #[airline name] for overbooking my flight, and now we have no pilot.” The tweet contains strongly positive words, but the actual sentiment implied is the exact opposite. “Addressing the case of sarcasm is a very challenging and nuanced task,” she says. To overcome the problem, the team needed a new strategy.

To develop the SentiMo program the team coded a hard set of rules for the algorithm to apply, but sarcasm demanded a different approach. “People are so creative, using all different forms of language, even creative hashtags. For the sarcasm detection, we found we could not think of any rules,” Yang says.

Yang’s main collaborator is fellow IHPC researcher Raj Kumar Gupta. “We work very closely together,” Yang says. “I work more as a behavioral scientist; he is the systems guy with a skill set of data science, programming, coding and machine learning.” So Yang and Gupta turned to machine learning, and built a program that taught itself to spot sarcasm [1].

“Natural languages are very complex, and sarcasm makes the whole task more difficult,” Gupta agrees. Even if you did try to develop a set of rules to detect sarcasm, it would be virtually impossible to capture this complexity, as context can completely change the meaning of a word, he says. “On the other hand, machine learning approaches are data driven and learn these complex relationships between the words automatically then use it to infer new observations,” he says.

Yang Yinping

People who use sarcasm tend to be cognitively complex, which means they also tend to use longer words, says Yang Yinping

For her part, Yang built a new lexicon of over 3,000 words that not only classified emotion-associated English words as positive or negative, but also applied more fine-grained scores for the intensity and strength underlying the words. A word like ‘content’ or ‘happy’ would score 1 out of 3 for the intensity of its expression of joy; ‘thrilled’ or ‘excited’ would score 3 out of 3, she explains. Similarly, anger can range from annoyance to rage. Common idioms and emoticons were also included.
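
An intensity-scored lexicon of this kind might be sketched as follows. The entries and scores below are illustrative, not drawn from Yang's actual lexicon:

```python
# Sketch of an emotion lexicon mapping words to (emotion, intensity 1-3),
# in the spirit described in the article. Entries are illustrative.

EMOTION_LEXICON = {
    "content":  ("joy", 1),
    "happy":    ("joy", 1),
    "pleased":  ("joy", 2),
    "thrilled": ("joy", 3),
    "excited":  ("joy", 3),
    "annoyed":  ("anger", 1),
    "angry":    ("anger", 2),
    "enraged":  ("anger", 3),
}

def max_intensity(text, emotion):
    """Highest intensity of the given emotion expressed in the text (0 if absent)."""
    scores = [score
              for w in text.lower().split()
              for emo, score in [EMOTION_LEXICON.get(w.strip(".,!?"), (None, 0))]
              if emo == emotion]
    return max(scores, default=0)

print(max_intensity("So thrilled to be happy today", "joy"))  # 3
```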

The team then used a collection of sarcastic tweets from which the machine learning algorithm could teach itself to detect key indicators of sarcasm, based on a set of features the team designed and tested.
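
As a rough illustration of this step, a tweet can be reduced to a vector of hand-designed features before being fed to a machine learning classifier. The features below are illustrative guesses, not the team's actual feature set:

```python
# Sketch: turn a tweet into a numeric feature vector for a sarcasm
# classifier. The four features are invented for illustration.

def features(tweet):
    words = tweet.lower().split()
    return [
        # Count of high-intensity joy words (tiny illustrative word list).
        sum(w.strip(".,!?") in {"love", "great", "thrilled"} for w in words),
        # Count of words longer than six letters.
        sum(len(w.strip(".,!?")) >= 7 for w in words),
        # Number of exclamation marks.
        tweet.count("!"),
        # Number of hashtags.
        sum(w.startswith("#") for w in tweet.split()),
    ]

print(features("Love waiting forever for a delayed flight!! #blessed"))
```

A classifier trained on many labeled sarcastic and non-sarcastic tweets then learns which combinations of such features signal sarcasm.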

Yang’s fine-grained capture of emotional intensity actually enhanced sarcasm detection, the team discovered. The use of high-intensity positive words, in combination with a negative situation, turned out to be a prevalent feature of sarcasm — which the machine learning algorithm learned to pick up on. “People use moderate or high intensity joy words, like ‘What a greeeat thing to come to a lecture on Saturday morning’, or ‘I really love it when I babysit at midnight’,” Yang says.
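
This cue, a high-intensity positive word co-occurring with a negative situation, could be sketched like this; both word lists are invented for illustration and are not the team's actual features:

```python
# Sketch of one sarcasm cue: a high-intensity positive word appearing
# alongside a negative-situation phrase. Word lists are illustrative.

HIGH_INTENSITY_POS = {"love", "great", "greeeat", "thrilled", "awesome"}
NEGATIVE_SITUATIONS = {"overbooking", "midnight", "delayed", "no pilot", "monday"}

def sarcasm_cue(text):
    """True if the text mixes strong positive wording with a negative situation."""
    words = set(text.lower().replace(",", " ").split())
    has_pos = bool(words & HIGH_INTENSITY_POS)
    has_neg = any(phrase in text.lower() for phrase in NEGATIVE_SITUATIONS)
    return has_pos and has_neg

print(sarcasm_cue("I really love it when I babysit at midnight"))  # True
```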

Capturing cognitive linguistic features also enhanced sarcasm detection, the team discovered. People who use sarcasm tend to be cognitively complex, which means they also tend to use longer words. “The proxy is to use the count of words of more than six letters,” she says.
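
The long-word proxy is simple to compute; a minimal sketch:

```python
# Count words of more than six letters, the proxy for cognitive
# complexity mentioned in the article.

def long_word_count(text, min_len=7):
    """Number of words with at least min_len letters after stripping punctuation."""
    return sum(1 for w in text.split() if len(w.strip(".,!?")) >= min_len)

print(long_word_count("What a fantastic, memorable experience"))  # 3
```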

In benchmarking tests, Yang and Gupta’s program identified sarcasm more accurately than the best previously published example, which came from a university in the United States in 2013. Because sarcasm is often used to convey negative emotion or attitudes, if the tool worked well it should improve the overall performance of sentiment analysis programs — which is exactly what the team observed when they used their sarcasm detection in conjunction with AlchemyLanguage, a commercial sentiment analysis tool recently incorporated into IBM’s Watson Natural Language Understanding service.

The Crystalace sarcasm detection team (L–R): Raj Kumar Gupta, Lu Zhi Hui and Yang Yinping.

© A*STAR Institute of High Performance Computing

Yang, Gupta and the team launched their Crystalace demo website in August 2017; it delves into the program's theoretical underpinnings, scripts and lexicon, and also allows users to test their own text for sarcasm. “Our current adopters are mostly analysts in consultancy companies and digital analytics companies,” Yang says. “The most prominent use is in the form of media consultancy, such as evaluating the public’s response to social policy.” Most recently, one group of consultants was so inspired by the tool that they established a start-up company to offer advanced social monitoring services, she adds.

In October 2018, the team announced they had partnered with Singapore Press Holdings (SPH) to develop a tool to optimize news headlines, assessing the emotion-related information conveyed by headlines. The researchers will identify elements of news headlines that are correlated with article popularity, then develop a tool to analyze potential headlines to predict which will attract most readers.

“Headlines are a key factor in attracting readers to delve into an article,” says Anthony Tan, deputy CEO of SPH. “This project will help our newsrooms better understand the emotional impact of different words and phrases used in headlines, so we can improve our engagement with readers,” he says.

Digital emotions

Meanwhile, Yang, Gupta and the team are applying their skills to other forms of online communication: audio and video streams. “We are integrating our text-to-emotion analysis capabilities with other A*STAR technologies, including automatic speech recognition, machine translation, acoustic analysis, and also facial expression recognition,” she says.

The combined tool will analyze audio and video for the language used, tone of voice, and facial expressions, to gain even more insight into sentiment and emotion expression. “It requires a lot of very specialized expertise to handle video data. This is ongoing research, attracting lots of interest, and we believe more exciting tools will be released early next year,” Yang says.

The A*STAR-affiliated researchers contributing to this research are from the Institute of High Performance Computing. For more information about the team’s research, please visit the Social Intelligence Group webpage.

References

  1. Gupta, R.K. & Yang, Y. CrystalNest at SemEval-2017 Task 4: Using Sarcasm Detection for Enhancing Sentiment Classification and Quantification. Proceedings of the 11th International Workshop on Semantic Evaluation, 626–633 (2017).