Latest

Seminar · Psychology

Deepfake emotional expressions trigger the uncanny valley brain response, even when they are not recognised as fake

Casey Becker
University of Pittsburgh
Apr 16, 2025

Facial expressions are inherently dynamic, and our visual system is sensitive to subtle changes in their temporal sequence. However, researchers often use dynamic morphs of photographs—simplified, linear representations of motion—to study the neural correlates of dynamic face perception. To explore the brain's sensitivity to natural facial motion, we constructed a novel dynamic face database using generative neural networks, trained on a verified set of video-recorded emotional expressions. The resulting deepfakes, consciously indistinguishable from videos, enabled us to separate biological motion from photorealistic form. Results showed that conventional dynamic morphs elicit distinct responses in the brain compared to videos and photos, suggesting that they violate expectations (N400) and have reduced social salience (late positive potential). This suggests that dynamic morphs misrepresent facial dynamism, resulting in misleading insights about the neural and behavioural correlates of face perception. Deepfakes and videos elicited largely similar neural responses, suggesting they could be used as a proxy for real faces in vision research where video recordings cannot be experimentally manipulated. And yet, despite being consciously undetectable as fake, deepfakes elicited an expectation violation response in the brain. This points to a neural sensitivity to naturalistic facial motion that operates beyond conscious awareness. Despite some differences in neural responses, the realism and manipulability of deepfakes make them a valuable asset for research where videos are unfeasible. Using these stimuli, we propose a novel marker for the conscious perception of naturalistic facial motion: frontal delta activity, which was elevated for videos and deepfakes, but not for photos or dynamic morphs.
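
The frontal delta marker described above is essentially a band-power measure over frontal electrodes. As a rough illustration only (not the authors' analysis pipeline), the sketch below estimates delta-band (1–4 Hz) power from an array of EEG epochs using Welch's method; the array layout, sampling rate, and frontal channel indices are assumptions.

```python
import numpy as np
from scipy.signal import welch

def frontal_delta_power(epochs, sfreq=500.0, frontal_idx=(0, 1, 2), band=(1.0, 4.0)):
    """Mean delta-band power over frontal channels.

    epochs : array of shape (n_trials, n_channels, n_samples) -- hypothetical layout.
    """
    f, psd = welch(epochs[:, list(frontal_idx), :], fs=sfreq, nperseg=256, axis=-1)
    mask = (f >= band[0]) & (f <= band[1])
    # Average power within the delta band, then over channels and trials.
    return psd[..., mask].mean()

# Example with simulated data: 40 trials, 32 channels, 1 s of EEG at 500 Hz.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((40, 32, 500))
print(frontal_delta_power(epochs))
```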

Seminar · Psychology

Comparing supervised learning dynamics: Deep neural networks match human data efficiency but show a generalisation lag

Lukas Huber
University of Bern
Sep 23, 2024

Recent research has seen many behavioral comparisons between humans and deep neural networks (DNNs) in the domain of image classification. Comparison studies often focus on the end result of the learning process, measuring and comparing the similarities in the representations of object categories once they have been formed. However, the process by which these representations emerge—that is, the behavioral changes and intermediate stages observed during acquisition—is less often directly and empirically compared. In this talk, I will report a detailed investigation of the learning dynamics in human observers and various classic and state-of-the-art DNNs. We develop a constrained supervised learning environment to align learning-relevant conditions such as starting point, input modality, available input data and the feedback provided. Across the whole learning process we evaluate and compare how well learned representations generalize to previously unseen test data. Comparisons across the entire learning process indicate that DNNs demonstrate a level of data efficiency comparable to human learners, challenging some prevailing assumptions in the field. However, our results also reveal representational differences: while DNNs' learning is characterized by a pronounced generalisation lag, humans appear to acquire generalizable representations immediately, without a preliminary phase of learning training-set-specific information that is only later transferred to novel data.
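
The generalisation lag reported here is defined by the gap between performance on trained items and performance on novel items, tracked over the course of learning. The sketch below shows that bookkeeping for a simple incremental learner on synthetic data; the dataset, classifier, and epoch schedule are placeholders, not the study's constrained learning environment.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

# Synthetic stand-in for an image-classification task (placeholder data).
X, y = make_classification(n_samples=2000, n_features=50, n_informative=20,
                           n_classes=4, random_state=0)
X_train, y_train = X[:1500], y[:1500]
X_test, y_test = X[1500:], y[1500:]

clf = SGDClassifier(random_state=0)
classes = np.unique(y)

# Track accuracy on trained items vs. held-out items after each pass over the data;
# a "generalisation lag" shows up as training accuracy rising well before test
# accuracy catches up.
for epoch in range(1, 21):
    clf.partial_fit(X_train, y_train, classes=classes)
    train_acc = clf.score(X_train, y_train)
    test_acc = clf.score(X_test, y_test)
    print(f"epoch {epoch:2d}  train {train_acc:.3f}  test {test_acc:.3f}  gap {train_acc - test_acc:.3f}")
```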

Seminar · Psychology

Conversations with Caves? Understanding the role of visual psychological phenomena in Upper Palaeolithic cave art making

Izzy Wisher
Aarhus University
Feb 26, 2024

How central were psychological features deriving from our visual systems to the early evolution of human visual culture? Art making emerged deep in our evolutionary history, with the earliest art appearing over 100,000 years ago as geometric patterns etched on fragments of ochre and shell, and figurative representations of prey animals flourishing in the Upper Palaeolithic (c. 40,000 – 15,000 years ago). The latter reflects a complex visual process: the ability to represent something that exists in the real world as a flat, two-dimensional image. In this presentation, I argue that pareidolia – the psychological phenomenon of seeing meaningful forms in random patterns, such as perceiving faces in clouds – was a fundamental process that facilitated the emergence of figurative representation. The influence of pareidolia has often been anecdotally observed in Upper Palaeolithic art, particularly cave art where the topographic features of the cave wall were incorporated into animal depictions. Using novel virtual reality (VR) light simulations, I tested three hypotheses relating to pareidolia in Upper Palaeolithic cave art in the caves of Las Monedas and La Pasiega (Cantabria, Spain). To evaluate this further, I also developed an interdisciplinary VR eye-tracking experiment in which participants were immersed in virtual caves based on the cave of El Castillo (Cantabria, Spain). Together, these case studies suggest that pareidolia was an intrinsic part of artist-cave interactions (‘conversations’) that influenced the form and placement of figurative depictions in the cave. This has broader implications for conceiving of the role of visual psychological phenomena in the emergence and development of figurative art in the Palaeolithic.

Seminar · Psychology

Characterising Representations of Goal Obstructiveness and Uncertainty Across Behavior, Physiology, and Brain Activity Through a Video Game Paradigm

Mi Xue Tan
University of Geneva
Dec 18, 2023

The nature of emotions and their neural underpinnings remains debated. Appraisal theories such as the component process model propose that the perception and evaluation of events (appraisal) is the key to eliciting the range of emotions we experience. Here we study whether the framework of appraisal theories provides a clearer account of the differentiation of emotional episodes and their functional organisation in the brain. We developed a stealth game to manipulate appraisals in a systematic yet immersive way. The interactive nature of video games heightens self-relevance through the experience of goal-directed action or reaction, evoking strong emotions. We show that our manipulations led to changes in behaviour, physiology and brain activation.

Seminar · Psychology

The contribution of mental face representations to individual face processing abilities

Linda Ficco
Friedrich-Schiller-Universität Jena
Sep 19, 2023

People differ widely in how well they can learn, memorize, and perceive faces. In this talk, I address two potential sources of this variation. One factor might be people’s ability to adapt their perception to the kind of faces they are currently exposed to. For instance, some studies report that those who show larger adaptation effects are also better at face learning and memory tasks. Another factor might be people’s sensitivity to fine differences between similar-looking faces. In fact, one study shows that the brains of good performers in a face memory task show larger neural differences between similar-looking faces. Capitalizing on this body of evidence, I present a behavioural study in which I explore the relationship between people’s perceptual adaptability and sensitivity and their individual face processing performance.

Seminar · Psychology

The Effects of Negative Emotions on Mental Representation of Faces

Fabiana Lombardi
University of Winchester
Nov 23, 2022

Face detection is an initial step in many social interactions, involving a comparison between a visual input and a mental representation of faces built from previous experience. Whilst emotional state has been found to affect the way humans attend to faces, little research has explored the effects of emotions on the mental representation of faces. Here, we examined how state anxiety and state depression modulate the geometric properties of the mental representations underlying face detection, and compared the emotional expression of those representations. To this end, we used an adaptation of the reverse correlation technique inspired by Gosselin and Schyns’ (2003) ‘Superstitious Approach’ to construct visual images of observers’ mental representations of faces and to relate these to their mental states. In two sessions, on separate days, participants were presented with ‘colourful’ noise stimuli and asked to detect faces, which they were told were present. Based on the noise fragments that were identified as faces, we reconstructed the pictorial mental representation used by each participant in each session. We found a significant correlation between the size of the mental representation of faces and participants’ level of depression. Our findings provide preliminary insight into the way emotions affect appearance expectations of faces. To further understand whether the facial expressions of participants’ mental representations reflect their emotional state, we are conducting a validation study in which a group of naïve observers classify the reconstructed face images by emotion. This allows us to assess whether the faces communicate participants’ emotional states to others.
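
The reverse-correlation (‘Superstitious Approach’) logic can be illustrated compactly: present pure noise, record which images the observer calls a face, and average those images to estimate the internal template. The toy sketch below uses grayscale noise and a simulated observer in place of the ‘colourful’ stimuli and human participants of the actual study.

```python
import numpy as np

rng = np.random.default_rng(1)
size = 64                      # noise images are size x size pixels (toy choice)
n_trials = 5000

# A hypothetical internal face template the simulated observer carries around.
yy, xx = np.mgrid[0:size, 0:size]
template = np.exp(-(((xx - 32) ** 2 + (yy - 28) ** 2) / 400.0))  # blurry blob stand-in

selected = []
for _ in range(n_trials):
    noise = rng.standard_normal((size, size))
    # The observer says "face" when the noise happens to correlate with the template.
    if np.vdot(noise, template - template.mean()) > 0:
        selected.append(noise)

# Classification image: the average of noise fragments judged to contain a face.
classification_image = np.mean(selected, axis=0)
print(classification_image.shape, len(selected))
```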

Seminar · Psychology

Untitled Seminar

Christel Devue
University of Liege
Mar 31, 2022

The nature of the facial information that humans store in order to recognise large numbers of faces remains unclear despite decades of research. To complicate matters further, little is known about how representations may evolve as novel faces become familiar, and there are large individual differences in the ability to recognise faces. I will present a theory I am developing which assumes that facial representations are cost-efficient. In that framework, individual facial representations would incorporate different diagnostic features for different faces, regardless of familiarity, and would evolve depending on the relative stability of a face’s appearance over time. Further, coarse information would be prioritised over fine details in order to reduce storage demands. This would create low-cost facial representations that are refined over time if appearance changes. Individual differences could partly rest on this ability to refine representations when needed. I will present data collected in the general population and in participants with developmental prosopagnosia. In support of the proposed view, typical observers and those with developmental prosopagnosia seem to rely on coarse peripheral features when they have no reason to expect that someone’s appearance will change in the future.

Seminar · Psychology

Computational Models of Fine-Detail and Categorical Information in Visual Working Memory: Unified or Separable Representations?

Timothy J Ricker
University of South Dakota
Nov 22, 2021

When we remember a stimulus, we rarely maintain a full-fidelity representation of the observed item. Our working memory instead maintains a mixture of the observed feature values and categorical/gist information. I will discuss evidence from computational models supporting a mix of categorical and fine-detail information in working memory. Having established the need for two memory formats in working memory, I will discuss whether categorical and fine-detail information about a stimulus is represented separately or as a single unified representation. Computational models of these two potential cognitive structures make differing predictions about the pattern of responses in visual working memory recall tests. The present study required participants to remember the orientation of stimuli for later reproduction. The pattern of responses is used to test the competing representational structures and to quantify the relative amounts of fine-detail and categorical information maintained. The effects of set size, encoding time, serial order, and response order on memory precision, categorical information, and guessing rates are also explored. (This is a 60-minute talk.)
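
The competing accounts are typically formalised as mixture models of recall error on the circle. The sketch below writes down one such density, mixing a fine-detail component centred on the studied orientation, a categorical component centred on the nearest prototype, and uniform guessing; the prototype set and parameter values are illustrative assumptions, not the models tested in the talk.

```python
import numpy as np
from scipy.stats import vonmises

# Category prototypes on the circle (e.g., cardinal directions) -- an assumption.
PROTOTYPES = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])

def mixture_density(response, target, p_fine=0.5, p_cat=0.3,
                    kappa_fine=15.0, kappa_cat=8.0):
    """Density of a recalled angle under a fine-detail + categorical + guess mixture."""
    p_guess = 1.0 - p_fine - p_cat
    # Fine-detail component: von Mises centred on the studied value.
    fine = vonmises.pdf(response, kappa_fine, loc=target)
    # Categorical component: von Mises centred on the nearest prototype.
    nearest = PROTOTYPES[np.argmin(np.abs(np.angle(np.exp(1j * (PROTOTYPES - target)))))]
    cat = vonmises.pdf(response, kappa_cat, loc=nearest)
    guess = 1.0 / (2 * np.pi)                     # uniform guessing on the circle
    return p_fine * fine + p_cat * cat + p_guess * guess

# Example: density of responding 0.4 rad after studying a 0.3 rad orientation.
print(mixture_density(0.4, 0.3))
```

Fitting such a model to reproduction errors (e.g., by maximum likelihood) yields the mixture weights and precisions that the abstract refers to as the relative amounts of fine-detail and categorical information.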

Seminar · Psychology

The diachronic account of attentional selectivity

Alon Zivony
Birkbeck, University of London
Oct 21, 2021

Many models of attention assume that attentional selection takes place at a specific moment in time which demarcates the critical transition from pre-attentive to attentive processing of sensory input. We argue that this intuitively appealing account is not only inaccurate, but has led to substantial conceptual confusion (to the point where some attention researchers propose abandoning the term ‘attention’ altogether). As an alternative, we offer a “diachronic” framework that describes attentional selectivity as a process that unfolds over time. Key to this view is the concept of attentional episodes: brief periods of intense attentional amplification of sensory representations that regulate access to working memory and response-related processes. We describe how attentional episodes are linked to earlier attentional mechanisms and to recurrent processing at the neural level. We present data showing that multiple sequential events can be involuntarily encoded in working memory when they appear during the same attentional episode, whether they are relevant or not. We also discuss the costs associated with processing multiple events within a single episode. Finally, we argue that breaking down the dichotomy between pre-attentive and attentive processing (as well as early vs. late selection) offers new solutions to old problems in attention research that have never been resolved. It can provide a unified and conceptually coherent account of the network of cognitive and neural processes that produce the goal-directed selectivity in perceptual processing that is commonly referred to as “attention”.

Seminar · Psychology

What are the consequences of directing attention within working memory?

Evie Vergauwe
University of Geneva
Oct 8, 2021

The role of attention in working memory remains controversial, but there is some agreement on the notion that the focus of attention holds mnemonic representations in a privileged state of heightened accessibility in working memory, resulting in better memory performance for items that receive focused attention during retention. Relatedly, representations held in the focus of attention are often observed to be robust and protected from degradation caused by either perceptual interference (e.g., Makovski & Jiang, 2007; van Moorselaar et al., 2015) or decay (e.g., Barrouillet et al., 2007). Recent findings indicate, however, that representations held in the focus of attention are particularly vulnerable to degradation, and thus appear to be particularly fragile rather than robust (e.g., Hitch et al., 2018; Hu et al., 2014). The present set of experiments aims to understand the apparent paradox of information in the focus of attention having a protected vs. vulnerable status in working memory. To that end, we examined the effect of perceptual interference on memory performance for information that was held within vs. outside the focus of attention, across different ways of bringing items into the focus of attention and across different time scales.

Seminar · Psychology

Age-related dedifferentiation across representational levels and its relation to memory performance

Malte Kobelt
Ruhr-University Bochum
Oct 7, 2021

Episodic memory performance decreases with advancing age. According to theoretical models, such memory decline might be a consequence of age-related reductions in the ability to form distinct neural representations of our past. In this talk, I will present our new age-comparative fMRI study investigating age-related neural dedifferentiation across different representational levels. By combining univariate analyses with searchlight pattern similarity analyses, we found that older adults show reduced category-selective processing in higher visual areas, less specific item representations in occipital regions, and less stable item representations. Dedifferentiation at all of these representational levels was related to memory performance, with item specificity being the strongest contributor. Overall, our results emphasize that age-related dedifferentiation can be observed across the entire cortical hierarchy, which may selectively impair memory performance depending on the memory task.
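
Item specificity in pattern similarity analyses is commonly quantified as the difference between same-item and different-item pattern correlations across repeated presentations. A minimal sketch of that contrast for one region's voxel patterns follows; the array layout and correlation measure are assumptions rather than the study's searchlight pipeline.

```python
import numpy as np

def item_specificity(patterns_rep1, patterns_rep2):
    """Within-item minus between-item pattern similarity.

    patterns_rep1/2 : arrays of shape (n_items, n_voxels), one row per item,
    from two repeated presentations (hypothetical layout).
    """
    # Mean-centre and normalise rows so that dot products are Pearson correlations.
    r1 = patterns_rep1 - patterns_rep1.mean(axis=1, keepdims=True)
    r2 = patterns_rep2 - patterns_rep2.mean(axis=1, keepdims=True)
    r1 /= np.linalg.norm(r1, axis=1, keepdims=True)
    r2 /= np.linalg.norm(r2, axis=1, keepdims=True)
    sim = r1 @ r2.T                       # (n_items, n_items) correlation matrix
    within = np.diag(sim).mean()          # same item across repetitions
    between = sim[~np.eye(len(sim), dtype=bool)].mean()  # different items
    return within - between

rng = np.random.default_rng(2)
a = rng.standard_normal((30, 200))
b = a + rng.standard_normal((30, 200))    # noisy repetition of the same items
print(item_specificity(a, b))
```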

Seminar · Psychology

Statistical Summary Representations in Identity Learning: Exemplar-Independent Incidental Recognition

Yaren Koca
University of Regina
Aug 26, 2021

The literature suggests that ensemble coding, the ability to represent the gist of sets, may be an underlying mechanism for becoming familiar with newly encountered faces. We investigated this possibility by introducing a new training paradigm involving incidental learning of target identities interspersed among distractors. Study 1 explored the effectiveness of this training paradigm and revealed that observers who learned the unfamiliar faces incidentally performed just as well as observers who were instructed to learn them, and that the intervening distractors did not disrupt familiarization. Using the same training paradigm, Study 2 investigated ensemble coding as an underlying mechanism for face familiarization by measuring familiarity with the targets at different time points using average images created from either seen or unseen encounters with the target. The results revealed that observers whose familiarity was tested with seen averages outperformed observers tested with unseen averages; however, this discrepancy diminished over time. In other words, successful recognition of the target faces became less reliant on previously encountered exemplars over time, suggesting an exemplar-independent representation that is likely achieved through ensemble coding. Taken together, the results provide direct evidence that ensemble coding is a viable underlying mechanism for face familiarization and that faces interspersed among distractors can be learned incidentally.
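
At their simplest, the ‘average images’ used here are pixel-wise means of aligned exemplars, split into encounters the observer has seen and encounters that were withheld. A toy sketch, assuming pre-aligned grayscale images and omitting any morphing or warping used in the actual stimuli:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for 20 aligned grayscale exemplars of one identity (64 x 64 pixels).
exemplars = rng.random((20, 64, 64))

# Split into exemplars the observer encountered vs. exemplars that were withheld.
seen, unseen = exemplars[:10], exemplars[10:]

# Pixel-wise averages: a "seen" average built from studied exemplars,
# an "unseen" average built from exemplars the observer never encountered.
seen_average = seen.mean(axis=0)
unseen_average = unseen.mean(axis=0)
print(seen_average.shape, unseen_average.shape)
```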

Seminar · Psychology

Characterising the brain representations behind variations in real-world visual behaviour

Simon Faghel-Soubeyrand
Université de Montréal
Aug 5, 2021

Not all individuals are equally competent at recognizing the faces they interact with. Revealing how the brains of different individuals support variations in this ability is a crucial step towards understanding real-world human visual behaviour. In this talk, I will present findings from a large high-density EEG dataset (>100k trials of participants processing various stimulus categories) and computational approaches aimed at characterising the brain representations behind the real-world proficiency of “super-recognizers”—individuals at the top of the face recognition ability spectrum. Using decoding analyses of time-resolved EEG patterns, we predicted the trial-by-trial activity of super-recognizer participants with high precision, and showed that evidence for variations in face recognition ability is disseminated along early, intermediate and late brain processing steps. Computational modeling of the underlying brain activity uncovered two representational signatures supporting higher face recognition ability: i) mid-level visual and ii) semantic computations. The two components were dissociable in brain processing time (the first around the N170, the second around the P600) and in level of computation (the first emerging from mid-level layers of visual convolutional neural networks, the second from a semantic model characterising sentence descriptions of images). I will conclude by presenting ongoing analyses of a well-known case of acquired prosopagnosia (PS) using similar computational modeling of high-density EEG activity.
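
Time-resolved decoding of EEG amounts to fitting a classifier at every sample of the epoch and scoring it with cross-validation. A compact sketch with simulated data; the epoch dimensions and the choice of a linear discriminant classifier are assumptions, not the study's exact decoder.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_trials, n_channels, n_times = 200, 64, 100
X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)          # two stimulus categories (toy labels)
X[y == 1, :, 40:60] += 0.3                # inject a weak "effect" mid-epoch

# Time-resolved decoding: one cross-validated classifier per time point.
scores = np.empty(n_times)
for t in range(n_times):
    clf = LinearDiscriminantAnalysis()
    scores[t] = cross_val_score(clf, X[:, :, t], y, cv=5).mean()

print("peak decoding accuracy:", scores.max(), "at sample", scores.argmax())
```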

Seminar · Psychology

Memory for Latent Representations: An Account of Working Memory that Builds on Visual Knowledge for Efficient and Detailed Visual Representations

Brad Wyble
Penn State University
Jul 7, 2021

Visual knowledge obtained from our lifelong experience of the world plays a critical role in our ability to build short-term memories. We propose a mechanistic explanation of how working memory (WM) representations are built from the latent representations of visual knowledge and can then be reconstructed. The proposed model, Memory for Latent Representations (MLR), features a variational autoencoder with an architecture that corresponds broadly to the human visual system and an activation-based binding pool of neurons that binds items’ attributes to tokenized representations. The simulation results revealed that shape information for stimuli that the model was trained on can be encoded and retrieved efficiently from latents in higher levels of the visual hierarchy. On the other hand, novel patterns that are completely outside the training set can be stored from a single exposure using only latents from early layers of the visual system. Moreover, the representation of a given stimulus can have multiple codes, representing specific visual features such as shape or color, in addition to categorical information. Finally, we validated our model by testing a series of predictions against behavioral results acquired from WM tasks. The model provides a compelling demonstration of visual knowledge yielding the formation of compact visual representations for efficient memory encoding.
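
The front end of MLR is a variational autoencoder: an image is compressed into latent variables from which it can later be reconstructed. The sketch below shows only that generic VAE core in PyTorch; the binding pool, tokenized item representations, and the visual-hierarchy-specific architecture of MLR are not reproduced, and the layer sizes are arbitrary.

```python
import torch
from torch import nn

class TinyVAE(nn.Module):
    """Minimal variational autoencoder: encode to a latent distribution, sample, decode."""

    def __init__(self, n_pixels=784, n_latent=20):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_pixels, 256), nn.ReLU())
        self.to_mu = nn.Linear(256, n_latent)
        self.to_logvar = nn.Linear(256, n_latent)
        self.decoder = nn.Sequential(nn.Linear(n_latent, 256), nn.ReLU(),
                                     nn.Linear(256, n_pixels), nn.Sigmoid())

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        return self.decoder(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    # Reconstruction error plus KL divergence of the latent posterior from a unit Gaussian.
    recon_err = nn.functional.binary_cross_entropy(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_err + kl

model = TinyVAE()
x = torch.rand(8, 784)                    # a batch of fake flattened images
recon, mu, logvar = model(x)
print(vae_loss(x, recon, mu, logvar).item())
```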

Seminar · Psychology

Investigating visual recognition and the temporal lobes using electrophysiology and fast periodic visual stimulation

Angelique Volfart
University of Louvain
Jun 24, 2021

The ventral visual pathway extends from the occipital to the anterior temporal regions, and is specialized in giving meaning to objects and people that are perceived through vision. Numerous studies in functional magnetic resonance imaging have focused on the cerebral basis of visual recognition. However, this technique is susceptible to magnetic artefacts in ventral anterior temporal regions, which has led to an underestimation of the role of these regions within the ventral visual stream, especially with respect to face recognition and semantic representations. Moreover, there is an increasing need for implicit methods of assessing these functions, as explicit tasks lack specificity. In this talk, I will present three studies using fast periodic visual stimulation (FPVS) in combination with scalp and/or intracerebral EEG to overcome these limitations and provide a high signal-to-noise ratio in temporal regions. I will show that, beyond face recognition, FPVS can be extended to investigate semantic representations using a face-name association paradigm and a semantic categorisation paradigm with written words. These results shed new light on the role of temporal regions and demonstrate the high potential of FPVS as a powerful electrophysiological tool for assessing various cognitive functions in neurotypical and clinical populations.
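
FPVS analyses quantify a response as the spectral amplitude (or signal-to-noise ratio) at the exact stimulation frequency relative to neighbouring frequency bins. A minimal single-channel sketch of that computation on simulated data; the stimulation rate, recording parameters, and neighbour-bin counts are illustrative choices.

```python
import numpy as np

def snr_at_frequency(signal, sfreq, target_freq, n_neighbors=10, n_skip=1):
    """SNR at a tagged frequency: amplitude at the bin of interest divided by the
    mean amplitude of surrounding bins (skipping the immediately adjacent ones)."""
    amp = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sfreq)
    idx = int(np.argmin(np.abs(freqs - target_freq)))
    neighbors = np.r_[idx - n_skip - n_neighbors: idx - n_skip,
                      idx + n_skip + 1: idx + n_skip + n_neighbors + 1]
    return amp[idx] / amp[neighbors].mean()

# Simulated 40 s recording at 250 Hz with a 6 Hz periodic response buried in noise.
sfreq, duration, base_rate = 250.0, 40.0, 6.0
t = np.arange(0, duration, 1.0 / sfreq)
rng = np.random.default_rng(5)
eeg = 0.5 * np.sin(2 * np.pi * base_rate * t) + rng.standard_normal(t.size)
print(snr_at_frequency(eeg, sfreq, base_rate))
```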

Seminar · Psychology

Visual working memory representations are distorted by their use in perceptual comparisons

Keisuke Fukuda
University of Toronto Mississauga, University of Toronto
Jun 22, 2021

Visual working memory (VWM) allows us to maintain a small amount of task-relevant information in mind so that we can use it to guide our behavior. Although past studies have successfully characterized its capacity limit and representational quality during maintenance, the consequences of using VWM representations for task-relevant behaviors have remained largely unknown. In this talk, I will demonstrate that VWM representations become distorted when they are used for perceptual comparisons with new visual inputs, especially when those inputs are subjectively similar to the VWM representations. Furthermore, I will show that this similarity-induced memory bias (SIMB) occurs both for simple stimuli (e.g., color, shape) and for complex stimuli (e.g., real-world objects, faces) that are perceptually encoded and retrieved from long-term memory. Given the observed versatility of the SIMB, its implications for other memory distortion phenomena (e.g., distractor-induced distortion, the misinformation effect) will be discussed.

Seminar · Psychology

Perception, attention, visual working memory, and decision making: The complete consort dancing together

Philip Smith
The University of Melbourne
Jun 17, 2021

Our research investigates how processes of attention, visual working memory (VWM), and decision-making combine to translate perception into action. Within this framework, the role of VWM is to form stable representations of transient stimulus events that allow them to be identified by a decision process, which we model as a diffusion process. In psychophysical tasks, we find that the capacity of VWM is well described by a sample-size model, which attributes changes in VWM precision with set size to differences in the number of evidence samples recruited to represent stimuli. In the first part of the talk, I review evidence for the sample-size model and highlight its strengths: it provides a parameter-free characterization of the set-size effect; it has plausible neural and cognitive interpretations; an attention-weighted version of the model accounts for the power law of VWM; and it accounts for the selective updating of VWM in multiple-look experiments. In the second part of the talk, I characterize the theoretical relationship between two-choice and continuous-outcome decision tasks using the circular diffusion model, highlighting their common features. I describe recent work characterizing the joint distributions of decision outcomes and response times in continuous-outcome tasks using the circular diffusion model, and show that the model can clearly distinguish variable-precision and simple mixture models of the evidence entering the decision process. The ability to distinguish these kinds of processes is central to current VWM studies.
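
The sample-size model treats each item as represented by a share of a fixed pool of noisy evidence samples, so precision falls with set size as the samples are divided among more items. A toy simulation of that prediction; the sample budget and single-sample noise level are arbitrary choices, not fitted parameters from this research programme.

```python
import numpy as np

rng = np.random.default_rng(6)
total_samples = 64            # fixed sampling budget shared across all items (assumption)
sample_sd = 20.0              # noise of a single evidence sample, in degrees

for set_size in (1, 2, 4, 8):
    samples_per_item = total_samples // set_size
    # Each item's remembered value is the mean of its evidence samples,
    # so its standard deviation shrinks as 1 / sqrt(samples per item).
    errors = rng.normal(0.0, sample_sd, size=(10000, samples_per_item)).mean(axis=1)
    print(f"set size {set_size}: predicted SD {sample_sd / np.sqrt(samples_per_item):5.2f}, "
          f"simulated SD {errors.std():5.2f}")
```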

Seminar · Psychology

Getting to know you: emerging neural representations during face familiarization

Gyula Kovács
Friedrich-Schiller University Jena
Jun 17, 2021

The successful recognition of familiar persons is critical for social interactions. Despite extensive research on the neural representations of familiar faces, we know little about how such representations unfold as someone becomes familiar. In three EEG experiments, we elucidated how representations of face familiarity and identity emerge from different qualities of familiarization: brief perceptual exposure (Experiment 1), extensive media familiarization (Experiment 2) and real-life personal familiarization (Experiment 3). Time-resolved representational similarity analysis revealed that familiarization quality has a profound impact on representations of face familiarity: they were strongly visible after personal familiarization, weaker after media familiarization, and absent after perceptual familiarization. Across all experiments, we found no enhancement of face identity representation, suggesting that familiarity and identity representations emerge independently during face familiarization. Our results emphasize the importance of extensive, real-life familiarization for the emergence of robust face familiarity representations, constraining models of face perception and recognition memory.
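
Time-resolved representational similarity analysis compares, at each time point, the neural dissimilarities among conditions with the dissimilarities predicted by a model (here, a toy familiarity model). The sketch below uses correlation distance and a Spearman comparison, which are common choices rather than the exact pipeline of these experiments; the data are simulated.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(7)
n_conditions, n_channels, n_times = 12, 64, 100
data = rng.standard_normal((n_conditions, n_channels, n_times))   # condition-average EEG

# Model RDM: conditions 0-5 "familiar", 6-11 "unfamiliar" (toy familiarity model).
labels = np.array([0] * 6 + [1] * 6)
model_rdm = pdist(labels[:, None], metric="cityblock")   # 1 if different group, else 0

# Neural RDM per time point, compared with the model RDM by Spearman correlation.
rsa_timecourse = np.empty(n_times)
for t in range(n_times):
    neural_rdm = pdist(data[:, :, t], metric="correlation")
    rsa_timecourse[t] = spearmanr(neural_rdm, model_rdm)[0]

print("peak model correlation:", rsa_timecourse.max())
```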
