
Brain recording techniques may have much more to offer than previously thought, say researchers. © Science Photo Library/Brand X Pictures/Getty Images
Many neurophysiological processes, such as memory, sensory perception and emotion, as well as diseases including Alzheimer’s, depression and autism, are mediated by brain regions located deep beneath the cerebral cortex. Techniques for non-invasively imaging millisecond-scale activity in these deep regions are limited. Now, an international team including A*STAR researchers has shown that magnetoencephalography (MEG) and electroencephalography (EEG) can be used to characterize fast-timescale activity in these deep brain structures.
In a recent breakthrough, A*STAR’s Pavitra Krishnaswamy, working with a research team spanning the United States, Sweden and Finland, developed a statistical machine learning approach that resolves deep brain activity with high temporal and spatial resolution. Using simulated test cases and experimental MEG/EEG recordings from healthy volunteers, the researchers demonstrated that their approach accurately maps deep brain activity amidst concurrent activity in cortical structures.
Deep brain activity typically generates weak MEG/EEG signals that are easily drowned out by louder signals arising from cortical activity. Therefore, characterizing the deep brain sources becomes akin to ‘picking out needles in a haystack’. Rather than solely relying on how ‘loud’ the brainwaves are, the team leveraged the fact that deep brain activity generates distinct spatial patterns across multiple MEG/EEG sensors positioned over the head.
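To make the needles-in-a-haystack picture concrete, consider the toy sketch below (a hypothetical illustration with made-up numbers, not the study’s actual lead fields): a shallow cortical source produces a strong, focal sensor pattern, while a deep source produces a far weaker but spatially broad one. Comparing the normalized patterns shows that the two differ in shape, not just in size, which is what gives an algorithm something to work with.

```python
# Toy comparison of sensor-array "fingerprints" for a shallow cortical
# source versus a deep source. Hypothetical numbers, illustration only.
import numpy as np

n_sensors = 64

# A shallow cortical source couples strongly to a few nearby sensors;
# a deep source couples weakly but broadly to many sensors.
cortical_pattern = np.zeros(n_sensors)
cortical_pattern[:8] = 1.0               # focal and strong
deep_pattern = np.full(n_sensors, 0.05)  # broad and much weaker

# Amplitude gap: in raw signal strength, the cortex dominates (~7x here).
print("amplitude ratio:",
      np.linalg.norm(cortical_pattern) / np.linalg.norm(deep_pattern))

# Spatial gap: with amplitude normalized away, the two patterns are far
# from parallel (similarity well below 1), so they remain separable.
unit = lambda v: v / np.linalg.norm(v)
print("pattern similarity:", unit(cortical_pattern) @ unit(deep_pattern))
```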
The concept of sparsity, referring to how limited subsets of neurons across the brain ‘fire’ in sequential and coordinated patterns, also underpins the team’s research. If all of the cerebral cortex were active at the same instant, says Krishnaswamy, the cortical signal would “completely dwarf” that of the deep brain. However, when only a limited number of cortical regions are simultaneously active, it is possible to train an algorithm to see through the cortical signal into the deep brain. “When just a limited portion of the cortex is active, even though it appears louder than ongoing deep brain activity, it is possible to transform the data into a space where the deeper signals also have a distinct ‘voice’.”
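As a minimal sketch of how sparsity can be exploited, the example below uses plain L1-regularized regression with column (depth) weighting as a stand-in for the team’s statistical machine learning approach; the lead field, source configuration and noise level are all invented for illustration. With only a few cortical sources active, the weak deep source still surfaces in the recovered active set.

```python
# Minimal sparse-recovery sketch. Everything here is a hypothetical
# stand-in (L1 penalty plus depth weighting), not the published method.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n_sensors, n_cortical, n_deep = 64, 40, 4

# Hypothetical lead field: each column is one source's sensor pattern.
# Cortical columns are strong and focal; deep columns are weak and broad.
G_cortical = np.zeros((n_sensors, n_cortical))
for j in range(n_cortical):
    G_cortical[rng.choice(n_sensors, size=6, replace=False), j] = 1.0
G_deep = 0.05 * rng.normal(size=(n_sensors, n_deep))
G = np.hstack([G_cortical, G_deep])

# Depth weighting: normalize columns so deep sources are not penalized
# merely for their low sensor gain (a standard inverse-modeling trick).
Gw = G / np.linalg.norm(G, axis=0)

# Sparse ground truth: only two cortical sources and one deep source
# are active, mirroring the "limited portion of the cortex" scenario.
x_true = np.zeros(n_cortical + n_deep)
x_true[[3, 17, n_cortical]] = 1.0
y = G @ x_true + 0.01 * rng.normal(size=n_sensors)

# The L1 penalty favors a small set of active sources, letting the weak
# deep source surface despite the louder cortical activity.
fit = Lasso(alpha=1e-3, fit_intercept=False).fit(Gw, y)
active = np.flatnonzero(np.abs(fit.coef_) > 1e-2)
print("recovered active sources:", active)
# Should include indices 3 and 17 (cortical) and, crucially, 40 (deep).
```

Here the column normalization plays the role of transforming the data into a space where the deep source has a comparable ‘voice’; the published method builds a considerably more sophisticated statistical framework around the same intuition.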
Some of Krishnaswamy’s collaborators will now look to validate the approach for possible neuroscience and clinical applications. She will investigate ways to further develop such statistical machine learning approaches for adjacent applications in medical image analysis, where the goals are to resolve low signal-to-noise features, enhance reconstruction quality and, ultimately, reduce diagnostic error.
The A*STAR-affiliated researcher contributing to this research is from the Institute for Infocomm Research.