EEG
An Ecological and Objective Neural Marker of Implicit Unfamiliar Identity Recognition
We developed a novel paradigm measuring implicit identity recognition using Fast Periodic Visual Stimulation (FPVS) with EEG in 16 students and 12 police officers with normal face processing abilities. Participants' neural responses to a 1-Hz tagged oddball identity embedded within a 6-Hz image stream revealed implicit recognition with high-quality mugshots but not with CCTV-like images, suggesting a minimum image-quality requirement for implicit recognition. Our findings extend previous research by demonstrating that even unfamiliar identities can elicit robust neural recognition signatures through brief, repeated passive exposure. This approach offers potential for the objective validation of face processing abilities in forensic applications, including the assessment of facial examiners, Super-Recognisers, and eyewitnesses, potentially overcoming limitations of traditional behavioral assessment methods.
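For readers unfamiliar with frequency tagging, the sketch below illustrates the general logic of such a design: every Nth image in a fast stream shows the tagged oddball identity, so any identity-specific response is forced to appear at a known frequency. File names, trial duration, and pool sizes are illustrative assumptions, not values taken from the study.

```python
# Minimal sketch (not the authors' code): build an FPVS sequence in which every
# 6th image shows the tagged "oddball" identity, so base images appear at 6 Hz
# and the oddball identity at 1 Hz. All names and numbers are illustrative.
import numpy as np

BASE_RATE_HZ = 6.0          # presentation rate of the image stream
ODDBALL_EVERY_N = 6         # oddball identity every 6th image -> 1 Hz tag
TRIAL_SECONDS = 60          # hypothetical trial length

n_images = int(TRIAL_SECONDS * BASE_RATE_HZ)
rng = np.random.default_rng(0)

# Hypothetical stimulus pools: many base-identity images vs. images of the
# single tagged oddball identity (e.g., different photos of the same person).
base_pool = [f"base_{i:03d}.png" for i in range(200)]
oddball_pool = [f"oddball_{i:02d}.png" for i in range(20)]

sequence = []
for idx in range(n_images):
    if (idx + 1) % ODDBALL_EVERY_N == 0:      # positions 6, 12, 18, ... -> 1 Hz
        sequence.append(rng.choice(oddball_pool))
    else:
        sequence.append(rng.choice(base_pool))

onsets = np.arange(n_images) / BASE_RATE_HZ   # stimulus onset times in seconds
print(sequence[:12], onsets[:12])
```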
Using Fast Periodic Visual Stimulation to measure cognitive function in dementia
Fast periodic visual stimulation (FPVS) has emerged as a promising tool for assessing cognitive function in individuals with dementia. This technique leverages electroencephalography (EEG) to measure brain responses to rapidly presented visual stimuli, offering a non-invasive and objective method for evaluating a range of cognitive functions. Unlike traditional cognitive assessments, FPVS does not rely on behavioural responses, making it particularly suitable for individuals with cognitive impairment. In this talk I will highlight a series of studies that have demonstrated its ability to detect subtle deficits in recognition memory, visual processing and attention in dementia patients using EEG in the lab, at home and in clinic. The method is quick, cost-effective, and scalable, utilizing widely available EEG technology. FPVS holds significant potential as a functional biomarker for early diagnosis and monitoring of dementia, paving the way for timely interventions and improved patient outcomes.
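As an illustration of how tagged responses of this kind are commonly quantified (a standard FPVS convention, not the speaker's specific pipeline), the sketch below computes the EEG amplitude spectrum and expresses the response at the oddball frequency as an SNR and a z-score relative to neighbouring frequency bins. The sampling rate, target frequency, and data are simulated assumptions.

```python
# Minimal sketch: SNR and z-score of the spectral amplitude at a tagged
# frequency, computed against neighbouring bins. `eeg` is assumed to be a
# (n_channels, n_samples) array; all values here are illustrative.
import numpy as np

def fpvs_snr(eeg, sfreq, target_hz, n_neigh=10, skip=1):
    """SNR and z-score at target_hz, using n_neigh bins on each side and
    skipping `skip` bins immediately adjacent to the target bin."""
    n_samples = eeg.shape[-1]
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / sfreq)
    amp = np.abs(np.fft.rfft(eeg, axis=-1)) / n_samples  # amplitude spectrum
    amp = amp.mean(axis=0)                                # average over channels

    t = np.argmin(np.abs(freqs - target_hz))              # target frequency bin
    lo = np.r_[t - skip - n_neigh: t - skip]
    hi = np.r_[t + skip + 1: t + skip + 1 + n_neigh]
    neigh = amp[np.concatenate([lo, hi])]

    snr = amp[t] / neigh.mean()
    z = (amp[t] - neigh.mean()) / neigh.std(ddof=1)
    return snr, z

# Example with synthetic data: 64 channels, 60 s at 500 Hz, weak 1.2 Hz signal.
sfreq, dur = 500, 60
times = np.arange(sfreq * dur) / sfreq
eeg = np.random.randn(64, times.size) + 0.2 * np.sin(2 * np.pi * 1.2 * times)
print(fpvs_snr(eeg, sfreq, target_hz=1.2))
```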
Neural markers of lapses in attention during sustained ‘real-world’ task performance
Lapses in attention are ubiquitous and, unfortunately, the cause of many tragic accidents. One potential solution may be to develop assistance systems which can use objective, physiological signals to monitor attention levels and predict a lapse in attention before it occurs. As it stands, it is unclear which physiological signals are the most reliable markers of inattention, and even less is known about how reliably they will work in a more naturalistic setting. My project aims to address these questions across two experiments: a lab-based experiment and a more ‘real-world’ experiment. In this talk I will present the findings from my lab experiment, in which we combined EEG and pupillometry to detect markers of inattention during two computerised sustained attention tasks. I will then present the methods for my second, more ‘naturalistic’ experiment in which we use the same methods (EEG and pupillometry) to examine whether these markers can still be extracted from noisier data.
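As a purely hypothetical illustration of the general logic, and not the project's actual pipeline, the sketch below labels trials as attention lapses when reaction times are unusually slow and compares a candidate physiological marker, here pre-trial pupil diameter, between lapse and non-lapse trials. All data are simulated and the lapse criterion is an assumption.

```python
# Hypothetical illustration only: define lapses behaviourally (slow reaction
# times) and test whether a candidate marker differs on lapse trials.
# `rt` and `pupil_pretrial` are assumed per-trial arrays; data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_trials = 400
rt = rng.lognormal(mean=-0.5, sigma=0.3, size=n_trials)   # simulated RTs (s)
pupil_pretrial = rng.normal(4.0, 0.5, size=n_trials)      # simulated pupil (mm)

# Define lapses as the slowest trials (here: RT above the 90th percentile).
lapse = rt > np.percentile(rt, 90)

t, p = stats.ttest_ind(pupil_pretrial[lapse], pupil_pretrial[~lapse])
print(f"lapse trials: {lapse.sum()}, t = {t:.2f}, p = {p:.3f}")
```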
Diagnosing dementia using Fastball neurocognitive assessment
Fastball is a novel, fast, passive biomarker of cognitive function that uses cheap, scalable electroencephalography (EEG) technology. It is sensitive to early dementia; it is independent of language, education, effort and anxiety; and it can be used in any setting, including patients’ homes. It can capture a range of cognitive functions, including semantic memory, recognition memory, attention and visual function. We have shown that Fastball is sensitive to cognitive dysfunction in Alzheimer’s disease and Mild Cognitive Impairment, with data collected in patients’ homes using low-cost portable EEG. We are now preparing for significant scale-up and the validation of Fastball in primary and secondary care.
Disentangling neural correlates of consciousness and task relevance using EEG and fMRI
How does our brain generate consciousness, that is, the subjective experience of what it is like to see a face or hear a sound? Do we become aware of a stimulus during early sensory processing, or only later, when information is shared in a widespread fronto-parietal network? Neural correlates of consciousness are typically identified by comparing brain activity when a constant stimulus (e.g., a face) is perceived versus not perceived. However, in most previous experiments, conscious perception was systematically confounded with post-perceptual processes such as decision-making and report. In this talk, I will present recent EEG and fMRI studies dissociating neural correlates of consciousness and task-related processing in visual and auditory perception. Our results suggest that consciousness emerges during early sensory processing, while late, fronto-parietal activity is associated with post-perceptual processes rather than awareness. These findings challenge predominant theories of consciousness and highlight the importance of considering task relevance as a confound across different neuroscientific methods, experimental paradigms and sensory modalities.
Do we measure what we think we are measuring?
Tests used in the empirical sciences are often (implicitly) assumed to be representative of a target mechanism, in the sense that similar tests should lead to similar results. In this talk, using resting-state electroencephalogram (EEG) as an example, I will argue that this assumption does not necessarily hold true. Typically, EEG studies are conducted by selecting a single analysis method thought to be representative of the research question being asked. Using multiple methods, we extracted a variety of features from a single resting-state EEG dataset and conducted correlational and case-control analyses. We found that many EEG features revealed a significant effect in the case-control analyses. Similarly, many EEG features correlated significantly with cognitive tasks. However, when we compared these features pairwise, we did not find strong correlations. A number of explanations for these results will be discussed.
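The sketch below gives a minimal sense of the kind of analysis described: several common resting-state EEG features are extracted per participant and then correlated pairwise across participants. The feature set, sampling rate, and data are illustrative assumptions rather than the study's actual choices.

```python
# Minimal sketch, not the study's code: extract a few resting-state EEG
# features per participant and correlate them pairwise across participants.
# `raw_eeg` is assumed to be shaped (n_subjects, n_channels, n_samples).
import numpy as np
from scipy.signal import welch

def band_power(psd, freqs, lo, hi):
    return psd[..., (freqs >= lo) & (freqs < hi)].mean(axis=-1)

def extract_features(raw_eeg, sfreq=250):
    feats = {}
    freqs, psd = welch(raw_eeg, fs=sfreq, nperseg=2 * sfreq, axis=-1)
    psd = psd.mean(axis=1)                        # average over channels
    feats["alpha_power"] = band_power(psd, freqs, 8, 13)
    feats["theta_power"] = band_power(psd, freqs, 4, 8)
    # Spectral entropy of the normalised power spectrum.
    p = psd / psd.sum(axis=-1, keepdims=True)
    feats["spectral_entropy"] = -(p * np.log(p + 1e-12)).sum(axis=-1)
    return feats

rng = np.random.default_rng(2)
raw_eeg = rng.standard_normal((30, 32, 250 * 120))   # 30 subjects, 2 min each
feats = extract_features(raw_eeg)

# Pairwise Pearson correlations between features, across subjects.
names = list(feats)
corr = np.corrcoef(np.vstack([feats[n] for n in names]))
print(names)
print(np.round(corr, 2))
```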
Enhanced perception and cognition in deaf sign language users: EEG and behavioral evidence
In this talk, Dr. Quandt will share results from behavioral and cognitive neuroscience studies from the past few years of her work in the Action & Brain Lab, an EEG lab at Gallaudet University, the world's premier university for deaf and hard-of-hearing students. These results will center upon the question of how extensive knowledge of signed language changes, and in some cases enhances, people's perception and cognition. Evidence for this effect comes from studies of human biological motion using point light displays, self-report, and studies of action perception. Dr. Quandt will also discuss some of the lab's efforts in designing and testing a virtual reality environment in which users can learn American Sign Language from signing avatars (virtual humans).
Characterising the brain representations behind variations in real-world visual behaviour
Not all individuals are equally competent at recognizing the faces they interact with. Revealing how the brains of different individuals support variations in this ability is a crucial step towards understanding real-world human visual behaviour. In this talk, I will present findings from a large high-density EEG dataset (>100k trials of participants processing various stimulus categories) and computational approaches aimed at characterising the brain representations behind the real-world proficiency of “super-recognizers”, individuals at the top of the face recognition ability spectrum. Using decoding analyses of time-resolved EEG patterns, we predicted with high precision the trial-by-trial activity of super-recognizer participants, and showed that evidence for variations in face recognition ability is disseminated across early, intermediate and late brain processing steps. Computational modeling of the underlying brain activity uncovered two representational signatures supporting higher face recognition ability: i) mid-level visual and ii) semantic computations. The two components were dissociable in brain processing time (the former around the N170, the latter around the P600) and in level of computation (the former emerging from mid-level layers of visual Convolutional Neural Networks, the latter from a semantic model characterising sentence descriptions of images). I will conclude by presenting ongoing analyses of a well-known case of acquired prosopagnosia (PS) using similar computational modeling of high-density EEG activity.
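The sketch below illustrates the basic shape of time-resolved decoding as it is commonly implemented (an assumption about the general approach, not the authors' pipeline): a classifier is trained and cross-validated at each time point of the EEG epochs, yielding a decoding time course. Data and labels are simulated.

```python
# Minimal sketch of time-resolved decoding: fit and cross-validate a
# classifier at each time point of the epochs. `X` is assumed to be
# (n_trials, n_channels, n_times) and `y` the trial labels; data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n_trials, n_channels, n_times = 200, 64, 120
X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, size=n_trials)
X[y == 1, :, 40:60] += 0.3          # inject a weak "effect" after stimulus onset

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=5, scoring="roc_auc").mean()
    for t in range(n_times)
])
print("peak AUC:", scores.max(), "at time index", scores.argmax())
```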
Spatio-temporal large-scale organization of the trimodal connectome derived from concurrent EEG-fMRI and diffusion MRI
While time-averaged dynamics of brain functional connectivity are known to reflect the underlying structural connections, the exact relationship between large-scale function and structure remains an unsolved issue in network neuroscience. Large-scale networks are traditionally observed via correlations of fMRI timecourses, while connectivity of source-reconstructed electrophysiological measures is less prominent. Accessing the brain with multimodal recordings combining EEG, fMRI and diffusion MRI (dMRI) can help to refine our understanding of the spatio-temporal organization of both static and dynamic brain connectivity. In this talk I will discuss our prior findings that whole-brain connectivity derived from source-reconstructed resting-state (rs) EEG is linked to both the rs-fMRI and the dMRI connectome. The EEG connectome provides complementary information for linking function to structure as compared to an fMRI-only perspective. I will present an approach extending the multimodal data integration of concurrent rs-EEG-fMRI to the temporal domain by combining dynamic functional connectivity of both modalities to better understand the neural basis of functional connectivity dynamics. The close relationship between time-varying changes in EEG and fMRI whole-brain connectivity patterns provides evidence for spontaneous reconfigurations of the brain’s functional processing architecture. Finally, I will talk about the data quality of connectivity derived from concurrent EEG-fMRI recordings and how the presented multimodal framework could be applied to better understand focal epilepsy. In summary, this talk will give an overview of how to integrate large-scale EEG networks with MRI-derived brain structure and function. In conclusion, EEG-based connectivity measures are not only closely linked to MRI-based measures of brain structure and function over different time-scales, but also provide complementary information on the underlying functional brain organization.
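As a minimal illustration of one way such cross-modal comparisons can be set up (an assumption, not the presented framework), the sketch below correlates the upper triangles of an EEG-derived and an fMRI-derived connectivity matrix defined on the same parcellation. Both matrices are simulated placeholders.

```python
# Minimal sketch: compare two connectivity matrices defined on the same
# parcellation by correlating their upper triangles. Matrices are simulated.
import numpy as np
from scipy.stats import spearmanr

n_rois = 90                                    # hypothetical atlas size

def random_connectome(n, seed):
    rng = np.random.default_rng(seed)
    m = rng.standard_normal((n, n))
    return np.corrcoef(m)                      # symmetric, unit diagonal

fc_eeg = random_connectome(n_rois, 10)         # stand-in for EEG connectivity
fc_fmri = random_connectome(n_rois, 11)        # stand-in for fMRI connectivity

iu = np.triu_indices(n_rois, k=1)              # upper triangle, no diagonal
rho, p = spearmanr(fc_eeg[iu], fc_fmri[iu])
print(f"EEG-fMRI connectome similarity: rho = {rho:.2f}, p = {p:.3g}")
```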
Differential working memory functioning
The integrated conflict monitoring theory of Botvinick introduced cognitive demand into conflict monitoring research. We investigated the effects on conflict monitoring of individual differences in cognitive demand and in reinforcement sensitivity, another determinant of conflict monitoring. We showed evidence of differential variability in conflict monitoring intensity using the electroencephalogram (EEG), functional magnetic resonance imaging (fMRI) and behavioral data. Our data suggest that individual differences in anxiety and reasoning ability are differentially related to the recruitment of proactive and reactive cognitive control (cf. Braver). Building on these previous findings, the Leue-Lab team has investigated new psychometric data on conflict monitoring and proactive-reactive cognitive control. Moreover, data from the Leue-Lab suggest the relevance of individual differences in conflict monitoring for the context of deception. In this respect, we are planning new studies highlighting individual differences in the functioning of the Anterior Cingulate Cortex (ACC). Disentangling the role of individual differences in working memory-related cognitive demand, mental effort, and reinforcement-related processes opens new insights for cognitive-motivational approaches to information processing (Passcode to rewatch: 0R8v&m59).
Investigating visual recognition and the temporal lobes using electrophysiology and fast periodic visual stimulation
The ventral visual pathway extends from the occipital to the anterior temporal regions and is specialized in giving meaning to the objects and people that are perceived through vision. Numerous functional magnetic resonance imaging studies have focused on the cerebral basis of visual recognition. However, this technique is susceptible to magnetic artefacts in the ventral anterior temporal regions, which has led to an underestimation of the role of these regions within the ventral visual stream, especially with respect to face recognition and semantic representations. Moreover, there is an increasing need for implicit methods of assessing these functions, as explicit tasks lack specificity. In this talk, I will present three studies using fast periodic visual stimulation (FPVS) in combination with scalp and/or intracerebral EEG to overcome these limitations and provide a high signal-to-noise ratio (SNR) in temporal regions. I will show that, beyond face recognition, FPVS can be extended to investigate semantic representations using a face-name association paradigm and a semantic categorisation paradigm with written words. These results shed new light on the role of the temporal regions and demonstrate the high potential of the FPVS approach as a powerful electrophysiological tool for assessing various cognitive functions in neurotypical and clinical populations.
Getting to know you: emerging neural representations during face familiarization
The successful recognition of familiar persons is critical for social interactions. Despite extensive research on the neural representations of familiar faces, we know little about how such representations unfold as someone becomes familiar. In three EEG experiments, we elucidated how representations of face familiarity and identity emerge from different qualities of familiarization: brief perceptual exposure (Experiment 1), extensive media familiarization (Experiment 2) and real-life personal familiarization (Experiment 3). Time-resolved representational similarity analysis revealed that familiarization quality has a profound impact on representations of face familiarity: they were strongly visible after personal familiarization, weaker after media familiarization, and absent after perceptual familiarization. Across all experiments, we found no enhancement of face identity representation, suggesting that familiarity and identity representations emerge independently during face familiarization. Our results emphasize the importance of extensive, real-life familiarization for the emergence of robust face familiarity representations, constraining models of face perception and recognition memory.
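The sketch below shows the general form of time-resolved representational similarity analysis (RSA) as it is commonly implemented, not the study's code: a neural representational dissimilarity matrix (RDM) computed at each time point is correlated with a model RDM, here a hypothetical familiar-versus-unfamiliar model. All data are simulated.

```python
# Minimal sketch of time-resolved RSA: correlate a per-timepoint neural RDM
# with a model RDM. `patterns` is assumed to hold trial-averaged EEG responses
# per face identity, shaped (identities, channels, time); data are simulated.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
n_identities, n_channels, n_times = 16, 64, 100
patterns = rng.standard_normal((n_identities, n_channels, n_times))

# Hypothetical model RDM: the first half of the identities are "familiar";
# pairs from different groups get dissimilarity 1, same-group pairs get 0.
familiar = np.arange(n_identities) < n_identities // 2
model_rdm = pdist(familiar[:, None].astype(float), metric="cityblock")

rsa_timecourse = np.array([
    spearmanr(pdist(patterns[:, :, t], metric="correlation"), model_rdm)[0]
    for t in range(n_times)
])
print("peak model correlation:", rsa_timecourse.max())
```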