Research
The Department of Psychology at the University of Florida, College of Liberal Arts and Sciences, invites applications for a full-time, nine-month, tenure-accruing, open-area Assistant Professor position with special emphasis in quantitative methods, beginning August 16, 2024. We encourage applications from any research orientation in psychology, and the position is open to candidates who employ a wide variety of methodological tools or approaches (including, but not limited to, computational modeling, statistics, artificial intelligence, structural equation modeling, multilevel modeling, network analysis, and longitudinal data analysis). Applicants will be expected to maintain an outstanding program of research with high potential for external funding, teach psychology graduate and undergraduate courses, advise students, and provide service to the institution.
A personal journey on understanding intelligence
The focus of this talk is not my research in AI or robotics, but my own journey of trying to do research and understand intelligence in a rapidly evolving research landscape. I will trace my path from conducting early-stage research during graduate school, to working on practical solutions within a startup environment, and finally to my current role, where I participate in more structured research at a major tech company. Through these varied experiences, I will offer different perspectives on research and describe how my core beliefs about intelligence have changed and sometimes even been compromised. There are no lessons to be learned from my stories, but hopefully they will be entertaining.
FLUXSynID: High-Resolution Synthetic Face Generation for Document and Live Capture Images
Synthetic face datasets are increasingly used to overcome the limitations of real-world biometric data, including privacy concerns, demographic imbalance, and high collection costs. However, many existing methods lack fine-grained control over identity attributes and fail to produce paired, identity-consistent images under structured capture conditions. In this talk, I will present FLUXSynID, a framework for generating high-resolution synthetic face datasets with user-defined identity attribute distributions and paired document-style and trusted live capture images. The dataset generated using FLUXSynID shows improved alignment with real-world identity distributions and greater diversity compared to prior work. I will also discuss how FLUXSynID’s dataset and generation tools can support research in face recognition and morphing attack detection (MAD), enhancing model robustness in both academic and practical applications.
An Ecological and Objective Neural Marker of Implicit Unfamiliar Identity Recognition
We developed a novel paradigm measuring implicit identity recognition using Fast Periodic Visual Stimulation (FPVS) with EEG among 16 students and 12 police officers with normal face processing abilities. Participants' neural responses to a 1-Hz tagged oddball identity embedded within a 6-Hz image stream revealed implicit recognition with high-quality mugshots but not with CCTV-like images, suggesting that the paradigm requires sufficiently high image resolution. Our findings extend previous research by demonstrating that even unfamiliar identities can elicit robust neural recognition signatures through brief, repeated passive exposure. This approach offers potential for objective validation of face processing abilities in forensic applications, including assessment of facial examiners, Super-Recognisers, and eyewitnesses, potentially overcoming limitations of traditional behavioral assessment methods.
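The oddball response in this kind of FPVS design is typically quantified in the frequency domain, as the amplitude at the tagged frequency relative to neighbouring bins. Below is a minimal sketch of that generic signal-to-noise computation on simulated data; it illustrates the analysis logic only, and the sampling rate, recording length, and neighbourhood parameters are assumptions rather than the authors' actual pipeline.

```python
import numpy as np

def fpvs_snr(signal, fs, target_hz, n_neighbours=10, skip=1):
    """SNR at a frequency-tagged response: amplitude in the target bin divided
    by the mean amplitude of surrounding bins (skipping the bins immediately
    adjacent to the target)."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    amps = np.abs(np.fft.rfft(signal)) / n
    target = int(np.argmin(np.abs(freqs - target_hz)))
    neighbours = np.r_[amps[target - skip - n_neighbours:target - skip],
                       amps[target + skip + 1:target + skip + 1 + n_neighbours]]
    return amps[target] / neighbours.mean()

# Hypothetical example: 60 s at 250 Hz with a weak 1 Hz "oddball" component in noise.
fs, duration = 250, 60
t = np.arange(fs * duration) / fs
eeg = 0.3 * np.sin(2 * np.pi * 1.0 * t) + np.random.randn(t.size)
print(f"SNR at 1 Hz: {fpvs_snr(eeg, fs, 1.0):.2f}")    # tagged oddball frequency
print(f"SNR at 2.5 Hz: {fpvs_snr(eeg, fs, 2.5):.2f}")  # control frequency, ~1
```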
Deepfake emotional expressions trigger the uncanny valley brain response, even when they are not recognised as fake
Facial expressions are inherently dynamic, and our visual system is sensitive to subtle changes in their temporal sequence. However, researchers often use dynamic morphs of photographs (simplified, linear representations of motion) to study the neural correlates of dynamic face perception. To explore the brain's sensitivity to natural facial motion, we constructed a novel dynamic face database using generative neural networks trained on a verified set of video-recorded emotional expressions. The resulting deepfakes, consciously indistinguishable from videos, enabled us to separate biological motion from photorealistic form. Results showed that conventional dynamic morphs elicit distinct responses in the brain compared to videos and photos, suggesting that they violate expectations (N400) and have reduced social salience (late positive potential). This suggests that dynamic morphs misrepresent facial dynamism, resulting in misleading insights about the neural and behavioural correlates of face perception. Deepfakes and videos elicited largely similar neural responses, suggesting that deepfakes could be used as a proxy for real faces in vision research where video recordings cannot be experimentally manipulated. And yet, despite being consciously undetectable as fake, deepfakes elicited an expectation-violation response in the brain. This points to a neural sensitivity to naturalistic facial motion beyond conscious awareness. Despite some differences in neural responses, the realism and manipulability of deepfakes make them a valuable asset for research where videos are unfeasible. Using these stimuli, we propose a novel marker for the conscious perception of naturalistic facial motion, frontal delta activity, which was elevated for videos and deepfakes but not for photos or dynamic morphs.
PhenoSign - Molecular Dynamic Insights
Do You Know Your Blood Glucose Level? You Probably Should! A single measurement is not enough to truly understand your metabolic health. Blood glucose levels fluctuate dynamically, and meaningful insights require continuous monitoring over time. But glucose is just one example. Many other molecular concentrations in the body are not static. Their variations are influenced by individual physiology and overall health. PhenoSign, a Swiss MedTech startup, is on a mission to become the leader in real-time molecular analysis of complex fluids, supporting clinical decision-making and life sciences applications. By providing real-time, in-situ molecular insights, we aim to advance medicine and transform life sciences research. This talk will provide an overview of PhenoSign’s journey since its inception in 2022—our achievements, challenges, and the strategic roadmap we are executing to shape the future of real-time molecular diagnostics.
Comparing supervised learning dynamics: Deep neural networks match human data efficiency but show a generalisation lag
Recent research has seen many behavioral comparisons between humans and deep neural networks (DNNs) in the domain of image classification. Often, comparison studies focus on the end result of the learning process by measuring and comparing the similarities in the representations of object categories once they have been formed. However, the process by which these representations emerge, that is, the behavioral changes and intermediate stages observed during acquisition, is less often directly and empirically compared. In this talk, I will report a detailed investigation of the learning dynamics in human observers and various classic and state-of-the-art DNNs. We develop a constrained supervised learning environment to align learning-relevant conditions such as starting point, input modality, available input data, and the feedback provided. Across the whole learning process we evaluate and compare how well learned representations can be generalized to previously unseen test data. Comparisons across the entire learning process indicate that DNNs demonstrate a level of data efficiency comparable to human learners, challenging some prevailing assumptions in the field. However, our results also reveal representational differences: while DNNs' learning is characterized by a pronounced generalisation lag, humans appear to acquire generalizable representations immediately, without a preliminary phase of learning training-set-specific information that is only later transferred to novel data.
How to tell if someone is hiding something from you? An overview of the scientific basis of deception and concealed information detection
In my talk I will give an overview of recent research on deception and concealed information detection. I will start with a short introduction to the problems and shortcomings of traditional deception detection tools and why these still prevail in many recent approaches (e.g., AI-based deception detection). I will argue for the importance of more fundamental deception research and give some examples of the insights gained from it. In the second part of the talk, I will introduce the Concealed Information Test (CIT), a promising paradigm for research and applied contexts to investigate whether someone actually recognizes information that they do not want to reveal. The CIT is based on solid scientific theory and produces large effect sizes in laboratory studies across a number of different measures (e.g., behavioral, psychophysiological, and neural measures). I will highlight some challenges that a forensic application of the CIT still faces and how scientific research could help overcome them.
Exploring Lifespan Memory Development and Intervention Strategies for Memory Decline through a Unified Model-Based Assessment
Understanding and potentially reversing memory decline necessitates a comprehensive examination of memory's evolution throughout life. Traditional memory assessments, however, suffer from a lack of comparability across different age groups due to the diverse nature of the tests employed. Addressing this gap, our study introduces a novel, ACT-R model-based memory assessment designed to provide a consistent metric for evaluating memory function across the lifespan, from 5- to 85-year-olds. This approach allows for direct comparison across various tasks and materials tailored to specific age groups. Our findings reveal a pronounced U-shaped trajectory of long-term memory function, with performance at age 5 mirroring that observed in elderly individuals with impairments, highlighting critical periods of memory development and decline. Leveraging this unified assessment method, we further investigate the therapeutic potential of rs-fMRI-guided TBS targeting area 8AV, a region implicated in memory deterioration and mood disturbances, in individuals with early-onset Alzheimer's Disease. This research not only advances our understanding of memory's lifespan dynamics but also opens new avenues for targeted interventions in Alzheimer's Disease, marking a significant step forward in the quest to mitigate memory decay.
Enabling witnesses to actively explore faces and reinstate study-test pose during a lineup increases discrimination accuracy
In 2014, the US National Research Council called for the development of new lineup technologies to increase eyewitness identification accuracy (National Research Council, 2014). In a police lineup, a suspect is presented alongside multiple individuals known to be innocent who resemble the suspect in physical appearance, known as fillers. A correct identification decision by an eyewitness can lead to a guilty suspect being convicted or an innocent suspect being exonerated from suspicion. An incorrect decision can result in the perpetrator remaining at large, or even a wrongful conviction of a mistakenly identified person. Incorrect decisions carry considerable human and financial costs, so it is essential to develop and enact lineup procedures that maximise discrimination accuracy, that is, the witness's ability to distinguish guilty from innocent suspects. This talk focuses on new technology and innovation in the field of eyewitness identification. We will focus on the interactive lineup, a procedure that we developed based on research and theory from the basic science literature on face perception and recognition. The interactive lineup enables witnesses to actively explore and dynamically view the lineup members, and the procedure has been shown to maximise discrimination accuracy. The talk will conclude by reflecting on emerging technological frontiers and research opportunities.
Where Cognitive Neuroscience Meets Industry: Navigating the Intersections of Academia and Industry
In this talk, Mirta will share her journey from a mathematically focused high school to her current, unconventional career in London, emphasizing the evolution from a local education in Croatia to international experiences in the US and UK. We will explore the concept of interdisciplinary careers in the modern world, viewing them through the framework of increasing demand, flexibility, and dynamism in the current workplace. We will underscore the significance of interdisciplinary research for launching careers outside of academia and bolstering those within it. I will challenge the conventional norm of working either in academia or in industry, and encourage discussion about the opportunities for combining the two across a myriad of career paths. I'll use examples from my own and others' research to highlight opportunities for early career researchers to extend their work into practical applications. Such an approach leverages the strengths of both sectors, fostering innovation and practical applications of research findings. I hope these insights can offer valuable perspectives for those looking to navigate the evolving demands of the global job market, illustrating the advantages of a versatile skill set that spans multiple disciplines and allows extensions into exciting career options.
10 “simple rules” for socially responsible science
Guidelines concerning the potentially harmful effects of scientific studies have historically focused on minimizing risk for participants. However, studies can also indirectly inflict harm on individuals and social groups through how they are designed, reported, and disseminated. As evidenced by recent criticisms and retractions of high-profile studies dealing with a wide variety of social issues, there is a scarcity of resources and guidance on how to conduct research in a socially responsible manner. As such, even motivated researchers might publish work that has negative social impacts due to a lack of awareness. To address this, we proposed 10 recommendations ("simple rules") for researchers who wish to conduct more socially responsible science. These recommendations cover major considerations throughout the life cycle of a study, from inception to dissemination. They are not intended as a prescriptive list or a deterministic code of conduct. Rather, they are meant to help motivated scientists reflect on their social responsibility as researchers and actively engage with the potential social impact of their research.
Enhancing Qualitative Coding with Large Language Models: Potential and Challenges
Qualitative coding is the process of categorizing and labeling raw data to identify themes, patterns, and concepts within qualitative research. This process requires significant time, reflection, and discussion, and is often characterized by inherent subjectivity and uncertainty. Here, we explore the possibility of leveraging large language models (LLMs) to enhance the process and assist researchers with qualitative coding. LLMs, trained on extensive human-generated text, possess an architecture that renders them capable of understanding the broader context of a conversation or text. This allows them to extract patterns and meaning effectively, making them particularly useful for the accurate extraction and coding of relevant themes. In our current approach, we employed the ChatGPT (GPT-3.5 Turbo) API, integrating it into the qualitative coding process for data from the SWISS100 study, specifically focusing on data derived from centenarians' experiences during the COVID-19 pandemic, as well as a systematic centenarian literature review. We provide several instances illustrating how our approach can assist researchers with extracting and coding relevant themes. With data from human coders on hand, we highlight points of convergence and divergence between AI and human thematic coding in the context of these data. Moving forward, our goal is to enhance the prototype and integrate it with an LLM designed for local storage and operation (LLaMA). Our initial findings highlight the potential of AI-enhanced qualitative coding, yet they also pinpoint areas requiring attention. Based on these observations, we formulate tentative recommendations for the optimal integration of LLMs in qualitative coding research. Further evaluations using varied datasets and comparisons among different LLMs will shed more light on the question of whether and how to integrate these models into this domain.
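As a concrete illustration of how an LLM can be slotted into a coding workflow, here is a minimal sketch assuming the pre-1.0 `openai` Python client and the gpt-3.5-turbo chat endpoint. The codebook, excerpt, and prompt wording are hypothetical placeholders rather than the SWISS100 coding scheme, and a real pipeline would add batching and validation against human coders.

```python
# Minimal sketch of LLM-assisted qualitative coding (assumes the pre-1.0 `openai` client).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Hypothetical codebook, not the study's actual themes.
CODEBOOK = ["social isolation", "resilience", "health concerns", "family contact"]

def code_excerpt(excerpt: str) -> str:
    """Ask the model to assign codebook themes to one interview excerpt."""
    prompt = (
        "You are assisting with qualitative coding. Assign zero or more of the "
        f"following themes to the excerpt, as a comma-separated list: {', '.join(CODEBOOK)}.\n\n"
        f"Excerpt: {excerpt}"
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep the coding as deterministic as possible
    )
    return response["choices"][0]["message"]["content"]

print(code_excerpt("I could not see my grandchildren for months, but we spoke on the phone every week."))
```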
Touch in romantic relationships
Responsive behavior is crucial to relationship quality and well-being across a variety of interpersonal domains. In this talk I will share research from studies in which we investigate how responsiveness is conveyed nonverbally in the context of male friendships and heterosexual romantic relationships, largely focusing on affectionate touch as a nonverbal signal of understanding, validation, and care.
Representational Connectivity Analysis (RCA): a Method for Investigating Flow of Content-Specific Information in the Brain
Representational Connectivity Analysis (RCA) has gained mounting interest in the past few years. This is because, rather than tracking signals in the conventional sense, RCA allows for the tracking of information across the brain. It can also provide insights into the content and potential transformations of the transferred information. This presentation explains several variations of the method in terms of implementation and how it can be adapted for different modalities (E/MEG and fMRI). I will also present caveats and nuances of the method that should be considered when using RCA.
Brain and Behavior: Employing Frequency Tagging as a Tool for Measuring Cognitive Abilities
Frequency tagging based on fast periodic visual stimulation (FPVS) provides a window into ongoing visual and cognitive processing and can be leveraged to measure rule learning and high-level categorization. In this talk, I will present data demonstrating highly proficient categorization of images as living versus non-living in preschool children, and characterize the development of this ability during infancy. Beyond associating cognitive functions with development, an intriguing question is whether frequency tagging also captures enduring individual differences, e.g. in general cognitive abilities. First studies indicate high psychometric quality of FPVS categorization responses (Xu et al.; Dzhelyova), providing a basis for research on individual differences. I will present results from a pilot study demonstrating high correlations between FPVS categorization responses and behavioral measures of processing speed and fluid intelligence. Drawing upon this first evidence, I will discuss the potential of frequency tagging for diagnosing cognitive functions across development.
How AI is advancing Clinical Neuropsychology and Cognitive Neuroscience
This talk aims to highlight the immense potential of Artificial Intelligence (AI) in advancing the fields of psychology and cognitive neuroscience. Through the integration of machine learning algorithms, big data analytics, and neuroimaging techniques, AI has the potential to revolutionize the way we study human cognition and brain characteristics. In this talk, I will highlight our latest scientific advancements in utilizing AI to gain deeper insights into variations in cognitive performance across the lifespan and along the continuum from healthy to pathological functioning. The presentation will showcase cutting-edge examples of AI-driven applications, such as deep learning for automated scoring of neuropsychological tests, natural language processing to characterize the semantic coherence of patients with psychosis, and other applications to diagnose and treat psychiatric and neurological disorders. Furthermore, the talk will address the challenges and ethical considerations associated with using AI in psychological research, such as data privacy, bias, and interpretability. Finally, the talk will discuss future directions and opportunities for further advancements in this dynamic field.
What's wrong with the prosopagnosia literature? A new approach to diagnosing and researching the condition
Developmental prosopagnosia is characterised by severe, lifelong difficulties in recognising facial identity. Most researchers require that prosopagnosia cases exhibit ultra-conservative levels of impairment on the Cambridge Face Memory Test before including them in their experiments. This results in the majority of people who believe they have the condition being excluded from the scientific literature. In this talk I outline the many issues that will afflict prosopagnosia research if this continues, and show that these excluded cases do exhibit impairments on all commonly used diagnostic tests when a group-based method of assessment is utilised. I propose a paradigm shift away from cognitive task-based approaches to diagnosing prosopagnosia, and outline a new way for researchers to investigate this condition.
Adaptation via innovation in the animal kingdom
Over the course of evolution, the human race has achieved a number of remarkable innovations that have enabled us to adapt to and benefit from the environment ever more effectively. The ongoing environmental threats and health disasters of our world have now made it crucial to understand the cognitive mechanisms behind innovative behaviours. In my talk, I will present two research projects with examples of innovation-based behavioural adaptation from the animal kingdom, serving as a comparative psychological model for mapping the evolution of innovation. The first project focuses on the challenge of overcoming physical disability. In this study, we investigated an injured kea (Nestor notabilis) that exhibits efficient, intentional, and innovative tool-use behaviour to compensate for his disability, providing evidence for innovation-based adaptation to a physical disability in a non-human species. The second project focuses on the evolution of fire use from a cognitive perspective. Fire has been one of the most dominant ecological forces in human evolution; however, it is still unknown what capabilities and environmental factors could have led to the emergence of fire use. In the core study of this project, we investigated a captive population of Japanese macaques (Macaca fuscata) that has been regularly exposed to campfires during the cold winter months for over 60 years. Our results suggest that macaques are able to take advantage of the positive effects of fire while avoiding the dangers of flames and hot ashes, and exhibit calm behaviour around the bonfire. In addition, I will present a research proposal targeting the foraging behaviour of predatory birds in parts of Australia frequently affected by bushfires. Anecdotal reports suggest that some birds use burning sticks to spread the flames, a behaviour that has not yet been scientifically observed and evaluated. In summary, the two projects explore innovative behaviours across three different species groups, three different habitats, and three different ecological drivers, providing insights into the cognitive and behavioural mechanisms of adaptation through innovation.
The Effects of Negative Emotions on Mental Representation of Faces
Face detection is an initial step in many social interactions, involving a comparison between a visual input and a mental representation of faces built from previous experience. Whilst emotional state has been found to affect the way humans attend to faces, little research has explored the effects of emotions on the mental representation of faces. Here, we examined how state anxiety and state depression modulate the geometric properties of the mental representations underlying face detection, and compared the emotional expressions of those representations. To this end, we used an adaptation of the reverse correlation technique inspired by Gosselin and Schyns' (2003) 'Superstitious Approach' to reconstruct observers' mental representations of faces and to relate these to their mental states. In two sessions, on separate days, participants were presented with 'colourful' noise stimuli and asked to detect faces, which they were told were present. Based on the noise fragments that were identified as faces, we reconstructed the pictorial mental representation utilised by each participant in each session. We found a significant correlation between the size of the mental representation of faces and participants' level of depression. Our findings provide preliminary insight into the way emotions affect appearance expectations of faces. To further understand whether the facial expressions of participants' mental representations reflect their emotional state, we are conducting a validation study with a group of naïve observers who are asked to classify the reconstructed face images by emotion. Thus, we assess whether the faces communicate participants' emotional states to others.
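The core of a reverse-correlation analysis of this kind can be summarised in a few lines: the noise fields the observer labels as containing a face are averaged and contrasted against the rejected fields to approximate the internal face template. The sketch below is a generic illustration with random data and assumed image dimensions, not the authors' 'Superstitious Approach' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 1000 trials of 128x128 colour noise; the observer
# responds "face present" on some subset of trials.
n_trials, size = 1000, 128
noise = rng.standard_normal((n_trials, size, size, 3))
face_response = rng.random(n_trials) < 0.3  # stand-in for real responses

# Classification image: mean of the noise fields labelled "face" minus the
# mean of the remaining fields; structure in this image approximates the
# observer's internal face template.
classification_image = noise[face_response].mean(axis=0) - noise[~face_response].mean(axis=0)
print(classification_image.shape)  # (128, 128, 3)
```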
Do we measure what we think we are measuring?
Tests used in the empirical sciences are often (implicitly) assumed to be representative of a target mechanism, in the sense that similar tests should lead to similar results. In this talk, using resting-state electroencephalography (EEG) as an example, I will argue that this assumption does not necessarily hold. Typically, EEG studies are conducted with a single analysis method selected as representative of the research question being asked. Using multiple methods, we extracted a variety of features from a single resting-state EEG dataset and conducted correlational and case-control analyses. We found that many EEG features revealed a significant effect in the case-control analyses. Similarly, EEG features correlated significantly with cognitive tasks. However, when we compared these features pairwise, we did not find strong correlations. A number of explanations for these results will be discussed.
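The pairwise-comparison step described here boils down to correlating every extracted feature with every other feature across participants. A minimal sketch of that logic is given below; the feature names and the random data are assumptions for illustration, not the study's actual measures.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical resting-state EEG features, one value per participant (n = 100).
features = {
    "alpha_power": rng.standard_normal(100),
    "alpha_peak_freq": rng.standard_normal(100),
    "aperiodic_exponent": rng.standard_normal(100),
    "sample_entropy": rng.standard_normal(100),
}

# Pairwise correlations between features: if the features tapped the same
# underlying mechanism, these correlations should be consistently strong.
names = list(features)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        r, p = stats.pearsonr(features[names[i]], features[names[j]])
        print(f"{names[i]} vs {names[j]}: r = {r:+.2f}, p = {p:.3f}")
```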
The role of top-down mechanisms in gaze perception
Humans, as a social species, have an increased ability to detect and perceive visual elements involved in social exchanges, such as faces and eyes. The gaze, in particular, conveys information crucial for social interactions and social cognition. Researchers have hypothesized that in order to engage in dynamic face-to-face communication in real time, our brains must quickly and automatically process the direction of another person's gaze. There is evidence that direct gaze improves face encoding and attention capture, and that direct gaze is perceived and processed more quickly than averted gaze. These results are summarized as the "direct gaze effect". However, in the recent literature, there is evidence to suggest that the mode of visual information processing modulates the direct gaze effect. In this presentation, I argue that top-down processing, and specifically the relevance of eye features to the task, promotes the early preferential processing of direct versus indirect gaze. On the basis of several recent lines of evidence, I propose that low task relevance of eye features will prevent differential processing of gaze direction because their encoding will be superficial. Differential processing of direct and indirect gaze will only occur when the eyes are relevant to the task. To assess the implication of task relevance for the time course of cognitive processing, we will measure event-related potentials (ERPs) in response to facial stimuli. In this project, instead of typical ERP markers such as the P1, N170 or P300, we will measure lateralized ERPs (lERPs) such as the lateralized N170 and the N2pc, which are markers of early face encoding and attentional deployment, respectively. I hypothesize that the task relevance of eye features is crucial to the direct gaze effect and propose to revisit previous studies that had questioned the existence of the direct gaze effect. This claim will be illustrated with different past studies and recent preliminary data from my lab. Overall, I propose a systematic evaluation of the role of top-down processing in early direct gaze perception in order to understand the impact of context on gaze perception and, more broadly, on social cognition.
Heading perception in crowded environments
Self-motion through a visual world creates a pattern of expanding visual motion called optic flow. Heading estimation from optic flow is accurate in rigid environments, but it becomes challenging when other humans introduce independent motion into the scene. The biological motion of human walkers consists of translation through space and the associated limb articulation. This characteristic motion pattern is regular, though complex. A world full of humans moving around is nonrigid, causing heading errors. Limb articulation alone, however, does not perturb the global structure of the flow field and thus matches the rigidity assumption, so for heading perception based on optic flow analysis, limb articulation alone should not impair heading estimates. Yet we observed heading biases when participants encountered a group of point-light walkers. Our research investigates the interactions between optic flow perception and biological motion perception, and further analyzes the impact of environmental information.
Emotions and Partner Phubbing: The Role of Understanding and Validation in Predicting Anger and Loneliness
Interactions between romantic partners may be disturbed by problematic mobile phone use, i.e., phubbing. Research shows that phubbing reduces the ability to be responsive, but emotional aspects of phubbing, such as experiences of anger and loneliness, have not been explored. Anger has been linked to partner blame in negative social interactions, whereas loneliness has been associated with low social acceptance. Moreover, two aspects of partner responsiveness, understanding and validation, refer to the ability to recognize a partner's perspective and to convey acceptance of their point of view, respectively. High understanding and validation by a partner have been found to protect against negative affect during social interaction. The impact of understanding and validation on emotions has not been investigated in the context of phubbing; we therefore posit the following exploratory hypotheses: (1) participants will report higher levels of anger and loneliness on days with phubbing by their partner, compared to days without; (2) understanding and validation will moderate the relationship between phubbing intensity and levels of anger and loneliness. We conducted a daily diary study over seven days. Based on a sample of 133 participants who were in intimate relationships and living with their partners, we analyzed the nested within- and between-person data using multilevel models. Participants reported higher levels of anger and loneliness on days they experienced phubbing. Both understanding and validation buffered the relationship between phubbing intensity and negative experiences, and the interaction effects indicate certain nuances between the two constructs. Our research provides a unique insight into how specific mechanisms related to couple interactions may explain experiences of anger and loneliness.
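As an illustration of the kind of multilevel model used for daily-diary data of this sort, the sketch below fits a random-intercept model with a cross-level interaction using statsmodels. The column names, the simulated data, and the coefficients are hypothetical placeholders, not the study's variables or results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)

# Hypothetical long-format diary data: 133 participants x 7 days.
n_participants, n_days = 133, 7
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_participants), n_days),
    "phubbing": rng.random(n_participants * n_days),
    "validation": rng.random(n_participants * n_days),
})
# Simulated outcome with a main effect of phubbing and a buffering interaction.
df["anger"] = (0.5 * df["phubbing"]
               - 0.3 * df["phubbing"] * df["validation"]
               + rng.normal(0, 1, len(df)))

# Random-intercept multilevel model: days nested within participants,
# with a phubbing x validation cross-level interaction.
model = smf.mixedlm("anger ~ phubbing * validation", data=df, groups=df["participant"])
result = model.fit()
print(result.summary())
```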
Forensic use of face recognition systems for investigation
With the increasing development of automatic systems and artificial intelligence, face recognition is becoming increasingly important in forensic and civil contexts. However, face recognition has yet to be thoroughly studied empirically in order to provide an adequate scientific and legal framework for investigative and court purposes. This observation sets the foundation for our research. We focus on issues related to face images and the use of automatic systems. Our objective is to validate a likelihood ratio computation methodology for interpreting comparison scores from automatic face recognition systems (the score-based likelihood ratio, SLR). We collected three types of traces: portraits (ID), video surveillance footage recorded by an ATM camera, and footage recorded by a wide-angle camera (CCTV). The performance of two automatic face recognition systems is compared: the commercial IDEMIA Morphoface (MFE) system and the open-source FaceNet algorithm.
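The score-based likelihood ratio itself has a compact definition: the density of the observed comparison score under the same-source hypothesis divided by its density under the different-source hypothesis. Below is a minimal sketch using kernel density estimates on simulated calibration scores; the score distributions are invented for illustration and are not taken from MFE or FaceNet outputs.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)

# Hypothetical calibration scores from a face recognition system:
# comparisons between images of the same person vs. different people.
same_source_scores = rng.normal(0.75, 0.10, 2000)
diff_source_scores = rng.normal(0.40, 0.12, 2000)

same_kde = gaussian_kde(same_source_scores)
diff_kde = gaussian_kde(diff_source_scores)

def score_based_lr(score: float) -> float:
    """SLR = density of the score under the same-source hypothesis divided by
    its density under the different-source hypothesis."""
    return float(same_kde(score) / diff_kde(score))

print(f"SLR for a trace-vs-suspect score of 0.70: {score_based_lr(0.70):.1f}")
```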
Untitled Seminar
The nature of the facial information that humans store to recognise large numbers of faces remains unclear despite decades of research. To complicate matters further, little is known about how representations may evolve as novel faces become familiar, and there are large individual differences in the ability to recognise faces. I will present a theory I am developing, which assumes that facial representations are cost-efficient. In this framework, individual facial representations would incorporate different diagnostic features in different faces, regardless of familiarity, and would evolve depending on the relative stability of appearance over time. Further, coarse information would be prioritised over fine details in order to decrease storage demands. This would create low-cost facial representations that are refined over time if appearance changes. Individual differences could partly rest on the ability to refine representations when needed. I will present data collected in the general population and in participants with developmental prosopagnosia. In support of the proposed view, typical observers and those with developmental prosopagnosia seem to rely on coarse peripheral features when they have no reason to expect that someone's appearance will change in the future.
Black Excellence in Psychology
Ruth Winifred Howard (March 25, 1900 – February 12, 1997) was one of the first African-American women to earn a Ph.D. in Psychology. Her research focused on children with special needs. Join us as we celebrate her birthday anniversary with 5 distinguished Psychologists.
Leadership Support and Workplace Psychosocial Stressors
Research evidence indicates that psychosocial stressors such as work-life stress serve as negative occupational exposures related to poor health behaviors, including smoking, poor food choices, low levels of exercise, and even decreased sleep time, as well as a number of chronic health outcomes. The association between work-life stress and adverse health behaviors and chronic health outcomes suggests that Occupational Health Psychology (OHP) interventions such as leadership support trainings may be helpful in mitigating the effects of work-life stress and improving health, consistent with the Total Worker Health approach. This presentation will review workplace psychosocial stressors and leadership training approaches to reduce stress and improve health, highlighting a randomized controlled trial, the Military Employee Sleep and Health study.
Consistency of Face Identity Processing: Basic & Translational Research
Previous work on individual differences in face identity processing (FIP) has found that the most commonly used lab-based performance assessments are not, on their own, sufficiently sensitive to measure performance in both the upper and lower tails of the general population simultaneously. More recently, researchers have therefore begun incorporating multiple testing procedures into their assessments. Still, the growing consensus is that at the individual level there is considerable variability between test scores, with the consequence that extreme scores will occur simply by chance in large enough samples. To mitigate this issue, our recent work has developed measures of intra-individual FIP consistency to refine the selection of those with superior abilities (i.e., from the upper tail). First, we assessed the consistency of face matching and recognition in neurotypical controls and compared them to a sample of super-recognisers (SRs). For face matching, we demonstrated psychophysically that SRs show significantly greater consistency than controls in exploiting spatial frequency information. For face recognition, we showed that SRs' recognition of faces is highly related to the memorability of identities, yet effectively unrelated among controls. Overall, at the high end of the FIP spectrum, consistency can be a useful tool for revealing both qualitative and quantitative individual differences. Finally, in conjunction with collaborators from the Rheinland-Pfalz Police, we developed a pair of bespoke work samples to obtain bias-free measures of intra-individual consistency in current law enforcement personnel. Officers with higher composite scores on a set of three challenging FIP tests tended to show higher consistency, and vice versa. This suggests that consistency is not only a reasonably good marker of superior FIP abilities but could also offer important practical benefits for personnel selection in many other domains of expertise.
The diachronic account of attentional selectivity
Many models of attention assume that attentional selection takes place at a specific moment in time which demarcates the critical transition from pre-attentive to attentive processing of sensory input. We argue that this intuitively appealing account is not only inaccurate but has led to substantial conceptual confusion (to the point where some attention researchers propose abandoning the term 'attention' altogether). As an alternative, we offer a "diachronic" framework that describes attentional selectivity as a process that unfolds over time. Key to this view is the concept of attentional episodes: brief periods of intense attentional amplification of sensory representations that regulate access to working memory and response-related processes. We describe how attentional episodes are linked to earlier attentional mechanisms and to recurrent processing at the neural level. We present data showing that multiple sequential events can be involuntarily encoded in working memory when they appear during the same attentional episode, whether they are relevant or not. We also discuss the costs associated with processing multiple events within a single episode. Finally, we argue that breaking down the dichotomy between pre-attentive and attentive processing (as well as early vs. late selection) offers new solutions to old problems in attention research that have never been resolved. It can provide a unified and conceptually coherent account of the network of cognitive and neural processes that produce the goal-directed selectivity in perceptual processing that is commonly referred to as "attention".
Psychological essentialism in working memory research
Psychological essentialism is ubiquitous. It is one of the primary bases of thought and behaviour throughout our lifetime. This human tendency to posit an unseen, hidden entity behind observable phenomena or exemplars, however, leads us to somewhat biased thinking and reasoning even in the realm of science, including psychology. For example, a latent variable extracted from various measurements is just a statistical property calculated in structural equation modeling and therefore need not correspond to a fundamental reality. Yet we occasionally feel, a priori, that such a psychological construct has an essential nature. This talk will demonstrate examples of psychological essentialism in psychology and examine its influence on working memory-related issues, e.g., working memory training. These demonstrations, examinations, and the subsequent discussion will provide us with an opportunity to reconsider the concept of working memory.
Exploring perceptual similarity and its relation to image-based spaces: an effect of familiarity
One challenge in exploring the internal representation of faces is the lack of controlled stimulus transformations. Researchers are often limited to verbalizable transformations when creating a dataset. An alternative approach to verbalization for interpretability is to find image-based measures that allow us to quantify image transformations. In this study, we explore whether principal component analysis (PCA) can be used to create controlled transformations of a face by testing the effect of these transformations on human perceptual similarity and on computational differences in Gabor, pixel, and DNN spaces. We found that perceptual similarity and the three image-based spaces are linearly related, almost perfectly so in the case of the DNN, with a correlation of 0.94. This provides a controlled way to alter the appearance of a face. In Experiment 2, we explored the effect of familiarity on the perception of multidimensional transformations. Our findings show a positive relationship between the number of components transformed and both perceptual similarity and the same three image-based spaces used in Experiment 1. Furthermore, we found that familiar faces are rated as more similar overall than unfamiliar faces. That is, a change to a familiar face is perceived as making less of a difference than the exact same change to an unfamiliar face. The ability to quantify, and thus control, these transformations is a powerful tool for exploring the factors that mediate a change in perceived identity.
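The kind of PCA-based transformation described here can be sketched in a few lines: fit PCA to flattened face images, shift a chosen subset of components for one face, reconstruct the image, and measure the image-based change (here, pixel-space distance). The data, image size, number of components, and step size below are all assumptions for illustration; the study's actual face set, preprocessing, and Gabor/DNN measures are not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)

# Hypothetical stand-in for a set of aligned grayscale face images (64x64),
# flattened to vectors; a real analysis would use an actual face dataset.
faces = rng.random((200, 64 * 64))

pca = PCA(n_components=50).fit(faces)
codes = pca.transform(faces)

def transform_face(code, component_indices, step=2.0):
    """Shift the chosen PCA components of one face code and reconstruct the image."""
    new_code = code.copy()
    new_code[list(component_indices)] += step
    return pca.inverse_transform(new_code.reshape(1, -1))[0]

original = pca.inverse_transform(codes[:1])[0]
altered = transform_face(codes[0], component_indices=range(5))

# Pixel-space distance is one image-based measure that can then be related
# to human similarity ratings (alongside Gabor- and DNN-based distances).
print(f"Pixel-space distance after shifting 5 components: {np.linalg.norm(altered - original):.2f}")
```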
Categories, language, and visual working memory: how verbal labels change capacity limitations
The limited capacity of visual working memory constrains the quantity and quality of the information we can hold in mind for ongoing processing. Research from our lab has demonstrated that verbal labeling/categorization of visual inputs increases their retention and fidelity in visual working memory. In this talk, I will outline the hypotheses that explain the interaction between visual and verbal inputs in working memory, which leads to the boosts we observed. I will further show how manipulations of the categorical distinctiveness of the labels, the timing of their occurrence, which items the labels are applied to, and their validity modulate the benefits one can draw from combining visual and verbal inputs to alleviate capacity limitations. Finally, I will discuss the implications of these results for our understanding of working memory and its interaction with prior knowledge.
Differential working memory functioning
The integrated conflict monitoring theory of Botvinick introduced cognitive demand into conflict monitoring research. We investigated the effects on conflict monitoring of individual differences in cognitive demand and in another determinant of conflict monitoring, reinforcement sensitivity. We showed evidence of differential variability in conflict monitoring intensity using electroencephalography (EEG), functional magnetic resonance imaging (fMRI), and behavioral data. Our data suggest that individual differences in anxiety and reasoning ability are differentially related to the recruitment of proactive and reactive cognitive control (cf. Braver). Building on these findings, the team of the Leue-Lab examined new psychometric data on conflict monitoring and proactive-reactive cognitive control. Moreover, data from the Leue-Lab suggest the relevance of individual differences in conflict monitoring for the context of deception. In this respect, we plan new studies highlighting individual differences in the functioning of the Anterior Cingulate Cortex (ACC). Disentangling the role of individual differences in working memory-related cognitive demand, mental effort, and reinforcement-related processes opens new insights for cognitive-motivational approaches to information processing.
Impact evaluation for COVID-19 non-pharmaceutical interventions: what is (un)knowable?
COVID-19 non-pharmaceutical intervention (NPI) policies have been among the most important and contentious decisions of our time. Beyond even the "normal" inherent difficulties of impact evaluation with observational data, COVID-19 NPI policy evaluation is complicated by additional challenges related to infectious disease dynamics and lags, lack of direct observation of key outcomes, and a multiplicity of interventions occurring on an accelerated time scale. Randomized controlled trials also suffer from limits on what is feasible and ethical to randomize, as well as the sheer scale, scope, time, and resources required for an NPI trial to be informative (or at least not misinformative). In this talk, Dr. Haber will discuss the challenges in generating useful evidence for COVID-19 NPIs and the landscape of the literature, and highlight key controversies in several high-profile studies over the course of the pandemic. Chasing after unknowables poses major problems for the metascience/replicability movement, institutional research science, and decision makers. If the only choices for informing an important topic are "weak study design" vs. "do nothing," when is "do nothing" the best choice?
Redressing imbalances in the kind of research that gets done and who gets credit for it
If we want good work to get done, we should credit those who do it. In science, researchers are credited predominantly via authorship on publications. But many contributions to modern research are not recognized with authorship, due in part to the high bar imposed by the authorship criteria of many journals. "Contributorship" is a more inclusive framework for indicating who did what in the work described by a paper, and many scientific journals have recently implemented versions of it. I will consider the motivation for and specifics of this change, describe the tenzing tool we created to facilitate it, and discuss how we might want to support and shape the shift toward contributorship.
Perception, attention, visual working memory, and decision making: The complete consort dancing together
Our research investigates how processes of attention, visual working memory (VWM), and decision-making combine to translate perception into action. Within this framework, the role of VWM is to form stable representations of transient stimulus events that allow them to be identified by a decision process, which we model as a diffusion process. In psychophysical tasks, we find that the capacity of VWM is well described by a sample-size model, which attributes changes in VWM precision with set size to differences in the number of evidence samples recruited to represent stimuli. In the first part of the talk, I review evidence for the sample-size model and highlight the model's strengths: it provides a parameter-free characterization of the set-size effect; it has plausible neural and cognitive interpretations; an attention-weighted version of the model accounts for the power law of VWM; and it accounts for the selective updating of VWM in multiple-look experiments. In the second part of the talk, I provide a characterization of the theoretical relationship between two-choice and continuous-outcome decision tasks using the circular diffusion model, highlighting their common features. I describe recent work characterizing the joint distributions of decision outcomes and response times in continuous-outcome tasks using the circular diffusion model and show that the model can clearly distinguish variable-precision and simple mixture models of the evidence entering the decision process. The ability to distinguish these kinds of processes is central to current VWM studies.
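For readers unfamiliar with the sample-size model mentioned here, its central assumption can be stated compactly: if a fixed pool of evidence samples is shared equally among N items, each item receives a 1/N share, so memory precision falls inversely with set size and the standard deviation of the representation grows with the square root of set size. This is a schematic statement of the general idea rather than the speaker's exact formalism.

```latex
% Sample-size model, schematic form (J = precision, \sigma = standard deviation):
\[
  J(N) \;=\; \frac{J(1)}{N},
  \qquad
  \sigma(N) \;=\; \sigma(1)\,\sqrt{N}.
\]
```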
Getting to know you: emerging neural representations during face familiarization
The successful recognition of familiar persons is critical for social interactions. Despite extensive research on the neural representations of familiar faces, we know little about how such representations unfold as someone becomes familiar. In three EEG experiments, we elucidated how representations of face familiarity and identity emerge from different qualities of familiarization: brief perceptual exposure (Experiment 1), extensive media familiarization (Experiment 2) and real-life personal familiarization (Experiment 3). Time-resolved representational similarity analysis revealed that familiarization quality has a profound impact on representations of face familiarity: they were strongly visible after personal familiarization, weaker after media familiarization, and absent after perceptual familiarization. Across all experiments, we found no enhancement of face identity representation, suggesting that familiarity and identity representations emerge independently during face familiarization. Our results emphasize the importance of extensive, real-life familiarization for the emergence of robust face familiarity representations, constraining models of face perception and recognition memory.
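Time-resolved representational similarity analysis of the kind used here follows a simple recipe: at each time point, build a neural representational dissimilarity matrix (RDM) from the condition-wise response patterns and correlate it with a model RDM, for example one coding familiar versus unfamiliar identities. The sketch below illustrates that recipe on random data; the array shapes, the familiarity model, and the distance metrics are assumptions, not the experiments' actual parameters.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(4)

# Hypothetical EEG data: conditions (face identities) x channels x time points.
n_cond, n_chan, n_time = 20, 64, 200
eeg = rng.standard_normal((n_cond, n_chan, n_time))

# Hypothetical model RDM coding familiarity: first half familiar, second half not.
familiar = np.array([1] * 10 + [0] * 10)
model_rdm = pdist(familiar.reshape(-1, 1), metric="cityblock")  # 1 if familiarity differs

# Time-resolved RSA: correlate the neural RDM with the model RDM at every time point.
rsa_timecourse = np.empty(n_time)
for t in range(n_time):
    neural_rdm = pdist(eeg[:, :, t], metric="correlation")
    rsa_timecourse[t] = spearmanr(neural_rdm, model_rdm)[0]

print(rsa_timecourse[:5])
```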
Why does online collaboration work? Insights into sequential collaboration
The last two decades have seen a rise in online projects such as Wikipedia or OpenStreetMap, in which people collaborate to create a common product. Contributors in such projects often work together sequentially: the first contributor independently generates an entry (e.g., a Wikipedia article), which is then adjusted by subsequent contributors who add or correct information. We refer to this way of working together as sequential collaboration. This process has not yet been studied in the context of judgment and decision making, even though research has demonstrated that Wikipedia and OpenStreetMap yield very accurate information. In this talk, I give first insights into the structure of sequential collaboration, how adjusting each other's judgments can yield more accurate final estimates, which boundary conditions need to be met, and which underlying mechanisms may be responsible for successful collaboration. A preprint is available at https://psyarxiv.com/w4xdk/
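To make the idea of sequential collaboration concrete, the simulation below sketches one toy version: each chain starts from one contributor's noisy estimate, and later contributors either leave it alone or nudge it toward their own judgment. All parameters (truth, noise, adjustment rule, chain length) are invented for illustration and do not reproduce the talk's models or data.

```python
import numpy as np

rng = np.random.default_rng(5)

TRUTH = 100.0        # the quantity being estimated (hypothetical)
N_CHAINS = 5000      # number of simulated sequential-collaboration chains
CHAIN_LENGTH = 6     # contributors per chain
JUDGMENT_SD = 20.0   # noise in individual judgments
P_ADJUST = 0.5       # probability a contributor adjusts the current entry

final_estimates = np.empty(N_CHAINS)
for c in range(N_CHAINS):
    estimate = TRUTH + rng.normal(0, JUDGMENT_SD)      # first contributor's entry
    for _ in range(CHAIN_LENGTH - 1):
        own_judgment = TRUTH + rng.normal(0, JUDGMENT_SD)
        if rng.random() < P_ADJUST:
            estimate = (estimate + own_judgment) / 2   # partial correction
    final_estimates[c] = estimate

single_judges = rng.normal(TRUTH, JUDGMENT_SD, N_CHAINS)
print(f"Mean absolute error, single judge:  {np.abs(single_judges - TRUTH).mean():.2f}")
print(f"Mean absolute error, end of chain:  {np.abs(final_estimates - TRUTH).mean():.2f}")
```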
Lessons from the credibility revolution – social thermoregulation as a case study
The goal of this talk is to first provide a realization of why the replication crisis is omnipresent and then point to several tools with which listeners can improve their own work. To do so, I will go through our own work on social thermoregulation, point out why I thought changes were necessary, discuss which shortcomings we have in our own work, which measures we have taken to reduce those shortcomings, which tools we have relied on to do so, and which steps I believe we still need to take. Specifically, I will go through the following points: major replication failures and data fabrication in the field of psychology; replication failures of social thermoregulation studies; the realization that many of our studies were underpowered; the realization that many of our studies were very narrow in scope (i.e., in undergraduate students and mostly in the EU/US); and the realization that a lot of our measures were not independently validated. I will show these for our own work (but will also show why, via a meta-analysis, we have enough confidence to proceed with social thermoregulation research). Throughout the talk I will point you to the following tools that facilitate our work: templates for exploratory and confirmatory research and for meta-analyses (developed for our work, but easily adaptable for other programs), along with how to fork our templates; a lab philosophy; a research milestones sheet for collaborations and overviews; an Excel sheet for contributorship; and a tutorial for exploratory research. I would recommend listeners read through this chapter before the talk (I will repeat a lot of that work, but I will go into greater depth).
Beyond visual search: studying visual attention with multitarget visual foraging tasks
Visual attention refers to a set of processes allowing the selection of relevant, and the filtering out of irrelevant, information in the visual environment. A large amount of research on visual attention has involved visual search paradigms, in which observers are asked to report whether a single target is present or absent. However, recent studies have revealed that these classic single-target visual search tasks only provide a snapshot of how attention is allocated in the visual environment, and that multitarget visual foraging tasks may capture the dynamics of visual attention more accurately. In visual foraging, observers are asked to select multiple instances of multiple target types as fast as they can. A critical question in foraging research concerns the factors driving the next target selection. Most likely, this requires two steps: (1) identifying a set of candidates for the next selection, and (2) selecting the best option among these candidates. After briefly describing the advantages of visual foraging over visual search, I will review recent visual foraging studies testing the influence of several manipulations (e.g., target crypticity, number of items, selection modality) on foraging behaviour. Overall, these studies revealed that the next target selection during visual foraging is determined by the competition between three factors: target value, target proximity, and priming of features. I will explain how the analysis of individual differences in foraging behaviour can provide important information, with the idea that individuals show default internal biases toward value, proximity, and priming that determine their search strategy and behaviour.
The recent history of the replication crisis in psychology & how Open Science can be part of the solution
In recent years, more and more evidence has accumulated showing that many studies in psychological research cannot be replicated, effects are often overestimated, and little is publicly known about unsuccessful studies. What are the mechanisms behind this crisis? In this talk, I will explain how we got there and why it is still difficult to break free from the current system. I will further explain which role Open Science plays within the replication crisis and how it can help to improve science. This might sound like a pessimistic, negative talk, but I will end it on a positive note, I promise!
The problem of power in single-case neuropsychology
Case-control comparisons are a gold standard method for diagnosing and researching neuropsychological deficits and dissociations at the single-case level. These statistical tests, developed by John Crawford and collaborators, provide quantitative criteria for the classical concepts of deficit, dissociation and double-dissociation. Much attention has been given to the control of Type I (false positive) errors for these tests, but far less to the avoidance of Type II (false negative) errors; that is, to statistical power. I will describe the origins and limits of statistical power for case-control comparisons, showing that there are hard upper limits on power, which have important implications for the design and interpretation of single-case studies. My aim is to stimulate discussion of the inferential status of single-case neuropsychological evidence, particularly with respect to contemporary ideals of open science and study preregistration.
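For readers unfamiliar with the case-control test referred to here, the Crawford-Howell version treats the case as a sample of one and compares it with a small control sample using a modified t statistic with n - 1 degrees of freedom. The sketch below implements that test and uses a Monte Carlo simulation to illustrate the ceiling on power: for a moderate true deficit, adding controls soon stops increasing power, because the case's own single score limits detectability. The normality assumption, deficit sizes, and simulation settings are illustrative choices, not the speaker's analyses.

```python
import numpy as np
from scipy import stats

def crawford_howell(case_score, control_scores):
    """Crawford-Howell modified t-test comparing one case against a control sample.
    Returns the t statistic and the one-tailed p-value for a deficit (low score)."""
    controls = np.asarray(control_scores, dtype=float)
    n = controls.size
    t = (case_score - controls.mean()) / (controls.std(ddof=1) * np.sqrt((n + 1) / n))
    return t, stats.t.cdf(t, df=n - 1)

def simulated_power(true_deficit_sd, n_controls, alpha=0.05, n_sim=10000, seed=0):
    """Monte Carlo power of the one-tailed case-control test, assuming normal
    control scores and a case whose true mean lies `true_deficit_sd` SDs below."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        controls = rng.standard_normal(n_controls)
        case = -true_deficit_sd + rng.standard_normal()
        hits += crawford_howell(case, controls)[1] < alpha
    return hits / n_sim

# With a fixed 2 SD true deficit, power plateaus well below 1 as controls are added.
for n in (5, 10, 20, 100):
    print(f"2 SD deficit, {n:>3} controls: power ≈ {simulated_power(2.0, n):.2f}")
```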
Exploring Memories of Scenes
State-of-the-art machine vision models can predict human recognition memory for complex scenes with astonishing accuracy. In this talk I present work that investigated how memorable scenes are actually remembered and experienced by human observers. We found that memorable scenes were recognized largely based on recollection of specific episodic details but also based on familiarity for an entire scene. I thus highlight current limitations in machine vision models emulating human recognition memory, with promising opportunities for future research. Moreover, we were interested in what observers specifically remember about complex scenes. We thus considered the functional role of eye-movements as a window into the content of memories, particularly when observers recollected specific information about a scene. We found that when observers formed a memory representation that they later recollected (compared to scenes that only felt familiar), the overall extent of exploration was broader, with a specific subset of fixations clustered around later to-be-recollected scene content, irrespective of the memorability of a scene. I discuss the critical role that our viewing behavior plays in visual memory formation and retrieval and point to potential implications for machine vision models predicting the content of human memories.
Accuracy versus consistency: Investigating face and voice matching abilities
Deciding whether two different face photographs or voice samples come from the same person represents a fundamental challenge in applied settings. To date, most research has focussed on average performance in these tests, failing to consider individual differences and within-person consistency in responses. In the current studies, participants completed the same face or voice matching test on two separate occasions, allowing comparison of overall accuracy across the two timepoints as well as consistency in trial-level responses. In both experiments, participants were highly consistent in their performance. In addition, we demonstrated a large association between consistency and accuracy, with the most accurate participants also tending to be the most consistent. This is an important result for applied settings in which groups of super-matchers are deployed in real-world contexts. Being able to reliably identify these high performers on the basis of a single test has implications for recruitment by law enforcement agencies worldwide.
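The accuracy and consistency measures described here can be illustrated with a short simulation: accuracy is average performance across the two sessions, and consistency is the proportion of trials answered the same way both times. Everything below (participant numbers, trial counts, the latent-ability model) is a hypothetical stand-in used only to show the computation, not the studies' data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical data: 100 participants complete the same 40-trial matching
# test twice; responses are coded 1 (correct) / 0 (incorrect).
n_participants, n_trials = 100, 40
ability = rng.random(n_participants)            # latent skill, for simulation only
p_correct = 0.5 + 0.5 * ability[:, None]
session1 = (rng.random((n_participants, n_trials)) < p_correct).astype(int)
session2 = (rng.random((n_participants, n_trials)) < p_correct).astype(int)

# Accuracy: mean performance across both sessions.
accuracy = (session1.mean(axis=1) + session2.mean(axis=1)) / 2

# Consistency: proportion of trials answered the same way in both sessions.
consistency = (session1 == session2).mean(axis=1)

r, p = stats.pearsonr(accuracy, consistency)
print(f"accuracy-consistency correlation: r = {r:.2f}, p = {p:.3g}")
```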
A Manifesto for Big Team Science
Progress in psychology has been frustrated by challenges concerning replicability, generalizability, strategy selection, inferential reproducibility, and computational reproducibility. Although often discussed separately, I argue that these five challenges share a common cause: insufficient investment of resources into the typical psychology study. I further suggest that big team science can help address these challenges by allowing researchers to pool their resources to efficiently and drastically increase the amount of resources available for a single study. However, the current incentives, infrastructure, and institutions in academic science have all developed under the assumption that science is conducted by solo Principal Investigators and their dependent trainees. These barriers must be overcome if big team science is to be sustainable. Big team science likely also carries unique risks, such as the potential for big team science institutions to monopolize power, become overly conservative, make mistakes at a grand scale, or fail entirely due to mismanagement and a lack of financial sustainability. I illustrate the promise, barriers, and risks of big team science with the experiences of the Psychological Science Accelerator, a global research network of over 1400 members from 70+ countries.
Markers of brain connectivity and sleep-dependent restoration: basic research and translation into clinical populations
The human brain is a heavily interconnected structure giving rise to complex functions. While brain functionality is mostly revealed during wakefulness, the sleeping brain might offer another view into physiological and pathological brain connectivity. Furthermore, there is a large body of evidence supporting the idea that sleep mediates plastic changes in brain connectivity. Although brain plasticity depends on environmental input, which is provided in the waking state, disconnection during sleep might be necessary for integrating new information into existing knowledge while at the same time restoring brain efficiency. In this talk, I will present structural, molecular, and electrophysiological markers of brain connectivity and sleep-dependent restoration that we have evaluated using magnetic resonance imaging and electroencephalography in a healthy population. In a second step, I will show how we translated these findings into two clinical populations in which alterations in brain connectivity have been described: the neuropsychiatric disorder attention-deficit/hyperactivity disorder (ADHD) and the neurologic disorder thalamic ischemic stroke.