
In brief

Artificial electroencephalography data created by generative adversarial networks are helping to make brain-computer interfaces more accurate


Making fake brain waves more realistic

20 Jul 2021

A new framework based on generative adversarial networks generates artificial brain wave data that improves classification accuracy, boosting brain-computer interface performance.

From devices that help stroke patients regain the use of their limbs to novelty cat ears you move with your mind, the promise of brain-computer interface (BCI) technology is quickly becoming a reality. Despite this rapid progress, the performance of BCI technology remains limited by the large amounts of high-quality brain wave data needed to train classification algorithms, which decode real-world brain signals into a format that computers can use.

One issue is that tasks employed to collect brain wave data—typically measured using electroencephalography (EEG)—are often unrealistic. “Many BCI experiments rely on controlled conditions in which subjects are instructed to fully focus on the main task,” said Kai Keng Ang, a Senior Scientist at A*STAR’s Institute for Infocomm Research (I2R). “However, this is different from what normally happens in real-life situations where various internal and external factors can make it difficult to stay focused on the task.”

EEG data also vary from subject to subject and session to session, making it impractical to measure enough high-quality data from human subjects and difficult to generate artificial data using conventional models, Ang added.

In a new study, Ang collaborated with corresponding author Cuntai Guan of Nanyang Technological University to address these issues, designing a new framework that generates artificial EEG data to augment real training data for classification. Based on a type of neural network called a deep convolutional generative adversarial network (DCGAN), the framework is trained on EEG data measured from subjects performing a movement-intention detection task, either while completely focused (to simulate controlled conditions) or while distracted (to resemble real-life scenarios).
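For readers who want to see what such a framework looks like in code, the sketch below shows a minimal DCGAN-style generator and discriminator for multichannel EEG segments, assuming PyTorch. The electrode count, segment length, and layer sizes are illustrative placeholders, not the settings used in the study.

```python
# Minimal DCGAN-style sketch for multichannel EEG segments (PyTorch).
# All sizes below are illustrative assumptions, not the study's settings.
import torch
import torch.nn as nn

N_CHANNELS = 22   # hypothetical number of EEG electrodes
SEG_LEN = 256     # hypothetical samples per segment
LATENT_DIM = 100  # length of the noise vector fed to the generator

class Generator(nn.Module):
    """Upsamples a noise vector into a (channels x time) EEG-like segment."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose1d(LATENT_DIM, 128, kernel_size=16, stride=1),  # -> 128 x 16
            nn.BatchNorm1d(128),
            nn.ReLU(True),
            nn.ConvTranspose1d(128, 64, kernel_size=4, stride=4),           # -> 64 x 64
            nn.BatchNorm1d(64),
            nn.ReLU(True),
            nn.ConvTranspose1d(64, N_CHANNELS, kernel_size=4, stride=4),    # -> 22 x 256
            nn.Tanh(),  # generated segments scaled to [-1, 1]
        )

    def forward(self, z):  # z: (batch, LATENT_DIM, 1)
        return self.net(z)

class Discriminator(nn.Module):
    """Scores the probability that a segment is real rather than generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 64, kernel_size=4, stride=4),  # -> 64 x 64
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv1d(64, 128, kernel_size=4, stride=4),         # -> 128 x 16
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv1d(128, 1, kernel_size=16),                   # -> 1 x 1
            nn.Sigmoid(),
        )

    def forward(self, x):  # x: (batch, N_CHANNELS, SEG_LEN)
        return self.net(x).view(-1)

# The two networks are trained adversarially: the discriminator learns to
# separate real from generated segments, while the generator learns to fool it.
fake = Generator()(torch.randn(8, LATENT_DIM, 1))  # (8, 22, 256)
scores = Discriminator()(fake)                     # (8,)
```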

In addition to real training data, the DCGAN-based framework also learns from subject-specific variables, enabling it to generate subject-specific artificial EEG data. “This will significantly reduce the calibration time when tailoring a BCI system to a new user,” Ang explained.
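One plausible way to realize such subject-specific generation, building on the sketch above, is to condition the generator on a learned subject embedding, as in conditional GANs. The snippet below illustrates that idea; it is an assumed construction, and the study's exact conditioning mechanism may differ.

```python
# Illustrative subject-conditioned variant of the Generator sketched above:
# a learned subject embedding is concatenated with the noise vector so that
# samples can be drawn for a specific subject. This conditional-GAN-style
# construction is an assumption, not necessarily the paper's mechanism.
class ConditionalGenerator(nn.Module):
    def __init__(self, n_subjects=10, embed_dim=16):  # hypothetical sizes
        super().__init__()
        self.embed = nn.Embedding(n_subjects, embed_dim)
        self.body = Generator()
        # Widen the first layer to accept noise plus the subject embedding.
        self.body.net[0] = nn.ConvTranspose1d(
            LATENT_DIM + embed_dim, 128, kernel_size=16, stride=1)

    def forward(self, z, subject_id):  # z: (batch, LATENT_DIM, 1)
        s = self.embed(subject_id).unsqueeze(-1)    # (batch, embed_dim, 1)
        return self.body(torch.cat([z, s], dim=1))  # subject-specific sample

# Usage: generate eight segments "in the style of" subject 3.
g = ConditionalGenerator()
samples = g(torch.randn(8, LATENT_DIM, 1), torch.full((8,), 3, dtype=torch.long))
```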

The researchers, including study first author Fatemeh Fahimi, previously a postdoctoral researcher at I2R, also generated artificial EEG data using two benchmark methods for comparison. Classifiers trained on EEG data augmented with the artificial samples produced more accurate results than those trained on real EEG data alone, especially under the real-life, distracted scenario.
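The evaluation protocol itself is straightforward to sketch: train the same classifier once on real data alone and once on real plus generated data, then compare accuracy on held-out data. The snippet below illustrates this with scikit-learn on random placeholder arrays; it mirrors only the protocol, not the study's classifier, features, or results.

```python
# Sketch of the augmentation comparison, assuming scikit-learn and NumPy.
# Arrays are random stand-ins for extracted EEG features and GAN outputs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X_real, y_real = rng.normal(size=(200, 64)), rng.integers(0, 2, 200)
X_fake, y_fake = rng.normal(size=(400, 64)), rng.integers(0, 2, 400)  # "generated"
X_test, y_test = rng.normal(size=(100, 64)), rng.integers(0, 2, 100)

# Baseline: real training data only.
clf_real = LogisticRegression(max_iter=1000).fit(X_real, y_real)

# Augmented: real plus artificially generated training data.
clf_aug = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_real, X_fake]), np.concatenate([y_real, y_fake]))

print("real only :", accuracy_score(y_test, clf_real.predict(X_test)))
print("augmented :", accuracy_score(y_test, clf_aug.predict(X_test)))
```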

“The improvement in accuracy suggests that with effective artificial EEG data generation, we can achieve high performance without undergoing a long calibration session to obtain more EEG data,” Ang concluded.

The A*STAR-affiliated researchers contributing to this research are from the Institute for Infocomm Research (I2R).


References

Fahimi, F., Dosen, S., Ang, K.K., Mrachacz-Kersting, N. and Guan, C. Generative Adversarial Networks-Based Data Augmentation for Brain-Computer Interface. IEEE Transactions on Neural Networks and Learning Systems, 1–13 (2020).

About the Researcher

Kai Keng Ang is currently the Leader of the Signal Processing Group and a Senior Principal Scientist I with the A*STAR Institute for Infocomm Research (A*STAR I2R). He is also an Adjunct Associate Professor at the School of Computer Science and Engineering, Nanyang Technological University, Singapore. His current research interests include brain-computer interfaces, computational intelligence, machine learning, pattern recognition and signal processing.

This article was made for A*STAR Research by Wildtype Media Group