Dr. Tom Franken
A postdoctoral position is available in Dr. Tom Franken’s laboratory in the Department of Neuroscience at the Washington University School of Medicine in St. Louis. The project will study the neural circuits that parse visual scenes into organized collections of objects. We use a variety of techniques including high-density electrophysiology, behavior, optogenetics, and viral targeting in non-human primates. For more information on the lab, please visit sites.wustl.edu/frankenlab/. The PI is committed to mentoring and to nurturing a creative, thoughtful and collaborative lab culture.

The laboratory is situated in the Department of Neuroscience at the Washington University School of Medicine in St. Louis, a large and collaborative scientific community. This provides an ideal environment to train, conduct research, and launch a career in science. Postdoctoral appointees at Washington University receive a competitive salary and a generous benefits package (hr.wustl.edu/benefits/). WashU Neuroscience is consistently ranked as one of the top 10 places worldwide for neuroscience research and offers an outstanding interdisciplinary training environment for early career researchers. In addition to high-quality research facilities, career and professional development training for postdoctoral researchers is provided through the Career Center, Teaching Center, Office of Postdoctoral Affairs, and campus groups.

St. Louis is a city rich in culture, green spaces, free museums, world-class restaurants, and thriving music and arts scenes. On top of it all, St. Louis is affordable and commuting to campus is stress-free, whether you go by foot, bike, public transit, or car. The area combines the attractions of a major city with affordable lifestyle opportunities (postdoc.wustl.edu/prospective-postdocs/why-st-louis/).
Washington University is dedicated to building a diverse community of individuals who are committed to contributing to an inclusive environment – fostering respect for all and welcoming individuals from diverse backgrounds, experiences and perspectives. Individuals with a commitment to these values are encouraged to apply. Additional information on being a postdoc at Washington University in St. Louis can be found at neuroscience.wustl.edu/education/postdoctoral-research/ and postdoc.wustl.edu/prospective-postdocs.

Required Qualifications
- Ph.D. (or equivalent doctoral) degree in neuroscience (broadly defined).
- Strong background in electrophysiology, behavioral techniques, or scientific programming/machine learning.

Preferred Qualifications
- Experience with training of larger animals.
- Experience with electrophysiology.
- Experience with studies of the visual system.
- Ability to think creatively to solve problems.
- Well organized, with attention to detail.
- Excellent oral and written communication skills.
- Team player with a high level of initiative and motivation.

Working Conditions
This position works in a laboratory environment with potential exposure to biological and chemical hazards. The individual must be physically able to wear protective equipment and to provide standard care to research animals.

Salary Range
Base pay is commensurate with experience.

Applicant Special Instructions
Applicants should submit the following materials to Dr. Tom Franken at ftom@wustl.edu: 1) A cover letter explaining how their interest in the position matches their background and career goals. 2) A CV or biosketch. 3) Contact information for at least three professional references.

Accommodation
If you are unable to use our online application system and would like an accommodation, please email CandidateQuestions@wustl.edu or call the dedicated accommodation inquiry number at 314-935-1149 and leave a voicemail with the nature of your request.
Pre-Employment Screening
All external candidates receiving an offer for employment will be required to submit to pre-employment screening for this position. The screenings will include a criminal background check and, as applicable for the position, other background checks, drug screen, employment and education or licensure/certification verification, physical examination, certain vaccinations and/or governmental registry checks. All offers are contingent upon successful completion of required screening.

Benefits Statement
Washington University in St. Louis is committed to providing a comprehensive and competitive benefits package to our employees. Benefits eligibility is subject to employment status, full-time equivalent (FTE) workload, and weekly standard hours. Please visit our website at https://hr.wustl.edu/benefits/ to view a summary of benefits.

EEO/AA Statement
Washington University in St. Louis is committed to the principles and practices of equal employment opportunity and especially encourages applications by those from underrepresented groups. It is the University’s policy to provide equal opportunity and access to persons in all job titles without regard to race, ethnicity, color, national origin, age, religion, sex, sexual orientation, gender identity or expression, disability, protected veteran status, or genetic information.

Diversity Statement
Washington University is dedicated to building a diverse community of individuals who are committed to contributing to an inclusive environment – fostering respect for all and welcoming individuals from diverse backgrounds, experiences and perspectives. Individuals with a commitment to these values are encouraged to apply.
Prof. Li Zhaoping
Postdoctoral position in Human Visual Psychophysics with fMRI/MRI (m/f/d) (TVöD-Bund E13, 100%)

The Department of Sensory and Sensorimotor Systems (PI Prof. Li Zhaoping) at the Max Planck Institute for Biological Cybernetics and the University of Tübingen is currently looking for highly skilled and motivated individuals to work on projects aimed at understanding visual attentional and perceptual processes using fMRI/MRI. The framework and motivation of the projects can be found at: https://www.lizhaoping.org/zhaoping/AGZL_HumanVisual.html. The projects can involve, for example, visual search tasks, stereo vision tasks, and visual illusions, and will be discussed during the application process. fMRI/MRI can be combined with other methods such as eye tracking, TMS, and/or EEG as necessary. The postdoc will work closely with the principal investigator and, when needed, other members of Zhaoping's team.

Responsibilities:
• Conduct and participate in research projects, including lab and equipment set-up, data collection, data analysis, writing reports and papers, and presenting at scientific conferences.
• Participate in routine laboratory operations, such as planning and preparation for experiments, lab maintenance and lab procedures.
• Coordinate with the PI and other team members on strategy and project planning, including supervision of student projects and teaching assistance for university courses in our field.

Who we are: We use a multidisciplinary approach to investigate sensory and sensorimotor transforms in the brain (www.lizhaoping.org). Our approaches include both theoretical and experimental techniques: human psychophysics, fMRI imaging, EEG/ERP, and computational modelling.
One part of our group is located at the University, in the Centre for Integrative Neurosciences (CIN), and the other part is at the Max Planck Institute (MPI) for Biological Cybernetics, as the Department for Sensory and Sensorimotor Systems. You will have the opportunity to learn other skills in our multidisciplinary group and benefit from interactions with colleagues at the university, at the MPI, and internationally. This job opening is for the CIN or the MPI working group. The position (salary level TVöD-Bund E13, 100%) is for a duration of two years. An extension or a permanent contract after two years is possible, depending on circumstances. We seek to raise the number of women in research and teaching and therefore urge qualified women to apply. Disabled persons will be preferred in case of equal qualification.

Your application: The position is available immediately and will remain open until filled. Preference will be given to applications received by March 19th, 2023. We look forward to receiving your application, which should include:
(1) a cover letter, including a statement of roughly when you would like to start this position;
(2) a motivation statement;
(3) a CV;
(4) names and contact details of three people who can provide references;
(5) if available, transcripts from your past and current education listing the courses taken and their grades;
(6) if available, copies of your degree certificates;
(7) optionally, a pdf file of your best publication(s), or other documents and information that you think could strengthen your application.
Please use pdf files for these documents (you may combine them into a single pdf file) and send them to jobs.li@tuebingen.mpg.de, where informal inquiries can also be addressed. Please note that applications without complete information for (1)-(4) will not be considered, unless the cover letter includes an explanation and/or information about when the missing materials will be supplied.
For further opportunities in our group, please visit https://www.lizhaoping.org/jobs.html
IMPRS for Brain & Behavior
Join our unique transatlantic PhD program in neuroscience! The International Max Planck Research School (IMPRS) for Brain and Behavior is a transatlantic collaboration between two Max Planck Neuroscience institutes – the Max Planck-associated research center caesar and the Max Planck Florida Institute for Neuroscience – and the partner universities, the University of Bonn and Florida Atlantic University. It offers a fully funded international PhD program in neuroscience in either Bonn, Germany, or Jupiter, Florida. We offer an exciting opportunity for outstanding Bachelor's and/or Master's degree holders (or equivalent) from any field (life sciences, mathematics, physics, computer science, engineering, etc.) to be immersed in a stimulating environment that provides novel technologies to elucidate the function of brain circuits from molecules to animal behavior. The comprehensive and diverse expertise of the faculty in the exploration of brain-circuit function using advanced imaging and optogenetic techniques, combined with comprehensive training in fundamental neurobiology, will provide students with an exceptional level of knowledge to pursue a successful independent research career. Apply to Bonn, Germany by November 15, 2020 or to Florida, USA by December 1, 2020!
sensorimotor control, movement, touch, EEG
Traditionally, touch is associated with exteroception and is rarely considered a relevant sensory cue for controlling movements in space, unlike vision. We developed a technique to isolate and measure tactile involvement in controlling sliding finger movements over a surface. Young adults traced a 2D shape with their index finger under direct or mirror-reversed visual feedback to create a conflict between visual and somatosensory inputs. In this context, increased reliance on somatosensory input compromises movement accuracy. Based on the hypothesis that tactile cues contribute to guiding hand movements when in contact with a surface, we predicted poorer performance when the participants traced with their bare finger compared to when their tactile sensation was dampened by a smooth, rigid finger splint. The results supported this prediction. EEG source analyses revealed smaller current in the source-localized somatosensory cortex during sensory conflict when the finger directly touched the surface. This finding supports the hypothesis that, in response to mirror-reversed visual feedback, the central nervous system selectively gated task-irrelevant somatosensory inputs, thereby mitigating, though not entirely resolving, the visuo-somatosensory conflict. Together, our results emphasize touch’s involvement in movement control over a surface, challenging the notion that vision predominantly governs goal-directed hand or finger movements.
Computational Mechanisms of Predictive Processing in Brains and Machines
Predictive processing offers a unifying view of neural computation, proposing that brains continuously anticipate sensory input and update internal models based on prediction errors. In this talk, I will present converging evidence for the computational mechanisms underlying this framework across human neuroscience and deep neural networks. I will begin with recent work showing that large-scale distributed prediction-error encoding in the human brain directly predicts how sensory representations reorganize through predictive learning. I will then turn to PredNet, a popular predictive coding inspired deep network that has been widely used to model real-world biological vision systems. Using dynamic stimuli generated with our Spatiotemporal Style Transfer algorithm, we demonstrate that PredNet relies primarily on low-level spatiotemporal structure and remains insensitive to high-level content, revealing limits in its generalization capacity. Finally, I will discuss new recurrent vision models that integrate top-down feedback connections with intrinsic neural variability, uncovering a dual mechanism for robust sensory coding in which neural variability decorrelates unit responses, while top-down feedback stabilizes network dynamics. Together, these results outline how prediction error signaling and top-down feedback pathways shape adaptive sensory processing in biological and artificial systems.
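The core loop the predictive-processing framework posits can be sketched in a few lines: an internal estimate generates a prediction, the mismatch with the sensory sample is the prediction error, and the error drives the model update. The following toy is purely illustrative (the variable names, noise level, and learning rate are my own assumptions, not the speaker's model):

```python
import numpy as np

# Toy predictive-coding loop: the internal estimate is updated
# in proportion to the prediction error (input - prediction).
rng = np.random.default_rng(0)
true_signal = 0.8      # latent cause of the sensory input (assumed)
estimate = 0.0         # internal model's current belief
lr = 0.1               # learning rate applied to prediction errors

for _ in range(200):
    sample = true_signal + rng.normal(scale=0.05)  # noisy sensory input
    error = sample - estimate                      # prediction error
    estimate += lr * error                         # update internal model

print(round(estimate, 2))
```

After enough samples the estimate settles near the latent signal, so prediction errors shrink toward the noise floor: the "anticipate, compare, update" cycle in its simplest form.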
Go with the visual flow: circuit mechanisms for gaze control during locomotion
The Systems Vision Science Summer School & Symposium, August 11 – 22, 2025, Tuebingen, Germany
Applications are invited for the third edition of the Systems Vision Science (SVS) Summer School (running since 2023), designed for everyone interested in gaining a systems-level understanding of biological vision. We plan a coherent, graduate-level syllabus on the integration of experimental data with theory and models, featuring lectures, guided exercises and discussion sessions. The summer school will end with a Systems Vision Science symposium on frontier topics on August 20-22, with additional invited and contributed presentations and posters. A call for contributions and participation in the symposium will be sent out in spring 2025. All summer school participants are invited to attend, and welcome to submit contributions to the symposium.
Continuity and segmentation - two ends of a spectrum or independent processes?
Representational drift in human visual cortex
“Brain theory, what is it or what should it be?”
In the neurosciences the need for some 'overarching' theory is sometimes expressed, but it is not always obvious what is meant by this. One can perhaps agree that in modern science observation and experimentation are normally complemented by 'theory', i.e. the development of theoretical concepts that help guide and evaluate experiments and measurements. A deeper discussion of 'brain theory' will require the clarification of some further distinctions, in particular: theory vs. model, and brain research (and its theory) vs. neuroscience. Other questions are: Does a theory require mathematics? Or even differential equations? Today it is often taken for granted that the whole universe, including everything in it (for example humans, animals, and plants), can be adequately treated by physics, and that theoretical physics is therefore the overarching theory. Even if this is the case, it has turned out that in some parts of physics (the historical example is thermodynamics) it may be useful to simplify the theory by introducing additional theoretical concepts that can in principle be 'reduced' to more complex descriptions at the 'microscopic' level of basic physical particles and forces. In this sense, brain theory may be regarded as part of theoretical neuroscience, which sits within biophysics and therefore within physics, or theoretical physics. Still, in neuroscience and brain research, additional concepts are typically used to describe results and help guide experimentation that are 'outside' physics, beginning with neurons and synapses, names of brain parts and areas, up to concepts like 'learning', 'motivation', and 'attention'. Certainly, we do not yet have one theory that includes all these concepts, so 'brain theory' is still in a 'pre-Newtonian' state.
However, it may still be useful to understand in general the relations between a larger theory and its 'parts', or between microscopic and macroscopic theories, or between theories at different 'levels' of description. This is what I plan to do.
Seeing a changing world through the eyes of coral fishes
From Spiking Predictive Coding to Learning Abstract Object Representation
In the first part of the talk, I will present Predictive Coding Light (PCL), a novel unsupervised learning architecture for spiking neural networks. In contrast to conventional predictive coding approaches, which transmit only prediction errors to higher processing stages, PCL learns inhibitory lateral and top-down connectivity to suppress the most predictable spikes and passes a compressed representation of the input to higher processing stages. We show that PCL reproduces a range of biological findings and exhibits a favorable tradeoff between energy consumption and downstream classification performance on challenging benchmarks. The second part of the talk will feature our lab's efforts to explain how infants and toddlers might learn abstract object representations without supervision. I will present deep learning models that exploit the temporal and multimodal structure of their sensory inputs to learn representations of individual objects, object categories, or abstract super-categories such as "kitchen object" in a fully unsupervised fashion. These models offer a parsimonious account of how abstract semantic knowledge may be rooted in children's embodied first-person experiences.
“Development and application of gaze control models for active perception”
Gaze shifts in humans serve to direct the high-resolution vision provided by the fovea towards areas of the environment. Gaze can be considered a proxy for attention, or an indicator of the relative importance of different parts of the environment. In this talk, we discuss the development of generative models of human gaze in response to visual input. We discuss how such models can be learned, both using supervised learning and using implicit feedback as an agent interacts with the environment, the latter being more plausible in biological agents. We also discuss two ways such models can be used. First, they can improve the performance of artificial autonomous systems, in applications such as autonomous navigation. Second, because these models are contingent on the human's task, goals, and/or state in the context of the environment, observations of gaze can be used to infer information about user intent. This information can be used to improve human-machine and human-robot interaction by making interfaces more anticipative. We discuss example applications in gaze-typing, robotic tele-operation, and human-robot interaction.
The Unconscious Eye: What Involuntary Eye Movements Reveal About Brain Processing
Neuro-Optometric Rehabilitation - an introduction to the diagnosis and treatment of vision disorders secondary to neurological impairment
Restoring Sight to the Blind: Effects of Structural and Functional Plasticity
Visual restoration after decades of blindness is now becoming possible by means of retinal and cortical prostheses, as well as emerging stem cell and gene therapeutic approaches. After restoring visual perception, however, a key question remains. Are there optimal means and methods for retraining the visual cortex to process visual inputs, and for learning or relearning to “see”? Up to this point, it has been largely assumed that if the sensory loss is visual, then the rehabilitation focus should also be primarily visual. However, the other senses play a key role in visual rehabilitation due to the plastic repurposing of visual cortex during blindness by audition and somatosensation, and also to the reintegration of restored vision with the other senses. I will present multisensory neuroimaging results, cortical thickness changes, as well as behavioral outcomes for patients with Retinitis Pigmentosa (RP), which causes blindness by destroying photoreceptors in the retina. These patients have had their vision partially restored by the implantation of a retinal prosthesis, which electrically stimulates still viable retinal ganglion cells in the eye. Our multisensory and structural neuroimaging and behavioral results suggest a new, holistic concept of visual rehabilitation that leverages rather than neglects audition, somatosensation, and other sensory modalities.
Cognitive maps, navigational strategies, and the human brain
The hippocampus, visual perception and visual memory
Reading Scenes
Plasticity of the adult visual system
Retinal input integration in excitatory and inhibitory neurons in the mouse superior colliculus in vivo
An inconvenient truth: pathophysiological remodeling of the inner retina in photoreceptor degeneration
Photoreceptor loss is the primary cause of vision impairment and blindness in diseases such as retinitis pigmentosa and age-related macular degeneration. However, the death of rods and cones allows retinoids to permeate the inner retina, causing retinal ganglion cells to become spontaneously hyperactive, severely reducing the signal-to-noise ratio and creating interference in the communication between the surviving retina and the brain. Treatments aimed at blocking or reducing hyperactivity improve vision initiated from surviving photoreceptors and could enhance the signal fidelity generated by vision restoration methodologies.
The speed of prioritizing information for consciousness: A robust and mysterious human trait
Altered grid-like coding in early blind people and the role of vision in conceptual navigation
Vision for perception versus vision for action: dissociable contributions of visual sensory drives from primary visual cortex and superior colliculus neurons to orienting behaviors
The primary visual cortex (V1) directly projects to the superior colliculus (SC) and is believed to provide sensory drive for eye movements. Consistent with this, a majority of saccade-related SC neurons also exhibit short-latency, stimulus-driven visual responses, which are additionally feature-tuned. However, direct neurophysiological comparisons of the visual response properties of the two anatomically-connected brain areas are surprisingly lacking, especially with respect to active looking behaviors. I will describe a series of experiments characterizing visual response properties in primate V1 and SC neurons, exploring feature dimensions like visual field location, spatial frequency, orientation, contrast, and luminance polarity. The results suggest a substantial, qualitative reformatting of SC visual responses when compared to V1. For example, SC visual response latencies are actively delayed, independent of individual neuron tuning preferences, as a function of increasing spatial frequency, and this phenomenon is directly correlated with saccadic reaction times. Such “coarse-to-fine” rank ordering of SC visual response latencies as a function of spatial frequency is much weaker in V1, suggesting a dissociation of V1 responses from saccade timing. Consistent with this, when we next explored trial-by-trial correlations of individual neurons’ visual response strengths and visual response latencies with saccadic reaction times, we found that most SC neurons exhibited, on a trial-by-trial basis, stronger and earlier visual responses for faster saccadic reaction times. Moreover, these correlations were substantially higher for visual-motor neurons in the intermediate and deep layers than for more superficial visual-only neurons. No such correlations existed systematically in V1. Thus, visual responses in SC and V1 serve fundamentally different roles in active vision: V1 jumpstarts sensing and image analysis, but SC jumpstarts moving. 
I will finish by demonstrating, using V1 reversible inactivation, that, despite reformatting of signals from V1 to the brainstem, V1 is still a necessary gateway for visually-driven oculomotor responses to occur, even for the most reflexive of eye movement phenomena. This is a fundamental difference from rodent studies demonstrating clear V1-independent processing in afferent visual pathways bypassing the geniculostriate one, and it demonstrates the importance of multi-species comparisons in the study of oculomotor control.
Contentopic mapping and object dimensionality - a novel understanding of the organization of object knowledge
Our ability to recognize an object amongst many others is one of the most important features of the human mind. However, object recognition requires tremendous computational effort, as we must parse a complex and cluttered visual environment with ease and proficiency. This challenging feat depends on an effective organization of knowledge in the brain. Here I put forth a novel understanding of how object knowledge is organized in the brain, proposing that its organization follows key object-related dimensions, analogously to how sensory information is organized. Moreover, I will propose that this knowledge is laid out topographically on the cortical surface according to these object-related dimensions, which code for different types of representational content – I call this contentopic mapping. I will present a combination of fMRI and behavioral data to support these hypotheses, along with a principled way to explore the multidimensionality of object processing.
Guiding Visual Attention in Dynamic Scenes
Rethinking Attention: Dynamic Prioritization
Decades of research on the mechanisms of attentional selection have focused on identifying the units (representations) on which attention operates to guide prioritized sensory processing. These attentional units fit neatly into our understanding of how attention is allocated in a top-down, bottom-up, or history-driven fashion. In this talk, I will focus on attentional phenomena that are not easily accommodated within current theories of attentional selection – the "attentional platypuses," a name alluding to the observation that, within biological taxonomies, the platypus fits neither the mammal nor the bird category. Similarly, attentional phenomena that do not fit neatly within current attentional models suggest that those models need to be revised. I list a few instances of these "attentional platypuses" and then offer a new approach, Dynamically Weighted Prioritization, which stipulates that multiple factors impinge on the attentional priority map, each with a corresponding weight. The interaction between factors and their corresponding weights determines the current state of the priority map, which in turn constrains and guides the allocation of attention. I propose that this new approach be considered a supplement to existing models of attention, especially those that emphasize categorical organizations.
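The core idea – a priority map computed as a weighted combination of factor maps, with the weights themselves free to change – can be sketched in a few lines. This is purely an illustrative toy, not the speaker's model: the factor names (salience, goals, history), the normalization, and the specific weights are all assumptions made for the example.

```python
import numpy as np

def priority_map(factors, weights):
    """Toy priority map: weighted average of factor maps.

    Each factor is a 2-D array over (hypothetical) visual-field locations;
    the weights encode how strongly each factor currently impinges on the map.
    """
    total = sum(w * f for f, w in zip(factors, weights))
    return total / max(sum(weights), 1e-9)  # normalize by total weight

rng = np.random.default_rng(0)
salience = rng.random((4, 4))   # bottom-up factor (assumed)
goals = rng.random((4, 4))      # top-down factor (assumed)
history = rng.random((4, 4))    # selection-history factor (assumed)

# Goal-driven weighting dominates here; shifting the weights re-prioritizes
# the same factor maps without changing any of them.
pmap = priority_map([salience, goals, history], weights=[0.2, 0.6, 0.2])
target = np.unravel_index(np.argmax(pmap), pmap.shape)  # next attended location
```

The point of the sketch is that attention allocation falls out of the interaction between factors and their weights: changing `weights` alone moves `target`, which is the sense in which prioritization is "dynamically weighted."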
Traumatic brain injury and its visual sequelae
Mind Perception and Behaviour: A Study of Quantitative and Qualitative Effects
Perceptual illusions we understand well, and illusions which aren’t really illusions
Imagining and seeing: two faces of prosopagnosia
Reactivation in the human brain connects the past with the present
Attending to moments in time
Visuomotor learning of location, action, and prediction
Trends in NeuroAI - Brain-like topography in transformers (Topoformer)
Dr. Nicholas Blauch will present his work "Topoformer: Brain-like topographic organization in transformer language models through spatial querying and reweighting". Dr. Blauch is a postdoctoral fellow in the Harvard Vision Lab, advised by Talia Konkle and George Alvarez. Paper link: https://openreview.net/pdf?id=3pLMzgoZSA Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri | https://groups.google.com/g/medarc-fmri).
Retinal Photoreceptor Diversity Across Mammals
Generative models for video games (rescheduled)
Developing agents capable of modeling complex environments and human behaviors within them is a key goal of artificial intelligence research. Progress towards this goal has exciting potential for applications in video games, from new tools that empower game developers to realize new creative visions, to enabling new kinds of immersive player experiences. This talk focuses on recent advances of my team at Microsoft Research towards scalable machine learning architectures that effectively capture human gameplay data. In the first part of my talk, I will focus on diffusion models as generative models of human behavior. Previously shown to have impressive image generation capabilities, I present insights that unlock applications to imitation learning for sequential decision making. In the second part of my talk, I discuss a recent project taking ideas from language modeling to build a generative sequence model of an Xbox game.
Applied cognitive neuroscience to improve learning and therapeutics
Advancements in cognitive neuroscience have provided profound insights into the workings of the human brain, and its methods offer opportunities to enhance performance, cognition, and mental health. Drawing upon interdisciplinary collaborations at the University of California San Diego Human Performance Optimization Lab, this talk explores the application of cognitive neuroscience principles in three domains to improve human performance and alleviate mental health challenges. The first section will discuss studies addressing the role of vision and oculomotor function in athletic performance and the potential to train these foundational abilities to improve performance and sports outcomes. The second domain considers the use of electrophysiological measurements of the brain and heart to detect, and possibly predict, errors in manual performance, as shown in a series of studies with surgeons performing robot-assisted surgery. Lastly, findings from clinical trials testing personalized interventional treatments for mood disorders will be discussed, in which the temporal and spatial parameters of transcranial magnetic stimulation (TMS) are individualized to test whether personalization improves treatment response and can serve as a predictive biomarker to guide treatment selection. Together, these translational studies use the measurement tools and constructs of cognitive neuroscience to improve human performance and well-being.
Characterizing the causal role of large-scale network interactions in supporting complex cognition
Neuroimaging has greatly extended our capacity to study the workings of the human brain. Despite the wealth of knowledge this tool has generated, however, there are still critical gaps in our understanding. While tremendous progress has been made in mapping areas of the brain that are specialized for particular stimuli or cognitive processes, we still know very little about how large-scale interactions between different cortical networks facilitate the integration of information and the execution of complex tasks. Yet even the simplest behavioral tasks are complex, requiring integration over multiple cognitive domains. Our knowledge falls short not only in understanding how this integration takes place, but also in what drives the profound variation in behavior observed on almost every task, even within the typically developing (TD) population. The search for the neural underpinnings of individual differences is important not only philosophically, but also in the service of precision medicine. We approach these questions using a three-pronged approach. First, we create a battery of behavioral tasks from which we can calculate objective measures for different aspects of the behaviors of interest, with sufficient variance across the TD population. Second, using these individual differences in behavior, we identify the neural variance that explains the behavioral variance at the network level. Finally, using covert neurofeedback, we perturb the networks hypothesized to correspond to each of these components, thus directly testing their causal contribution. I will discuss our overall approach, as well as a few of the new directions we are currently pursuing.
Vision Unveiled: Understanding Face Perception in Children Treated for Congenital Blindness
Inhibition in the retina
Perception in Autism: Testing Recent Bayesian Inference Accounts
Computer vision and image processing applications on astrocyte-glioma interactions in 3D cell culture
FENS Forum 2024
Oculomotor vergence system through fMRI in persons with binocularly normal vision and persistent post-concussive symptoms with convergence insufficiency
FENS Forum 2024
Scalable microelectrode arrays: moving beyond time division multiplexing
Bernstein Conference 2024
Timing and transmission: the role of axonal action potential propagation speed in the synchronization of foveal vision
Bernstein Conference 2024
Evaluating Noise Tolerance in Drosophila Vision
COSYNE 2022
An insect vision-based flight control model with a plastic efference copy
COSYNE 2022
Organization of local directionally selective neurons informs global motion vision encoding
COSYNE 2022
A two-way luminance gain control in the fly brain ensures luminance invariance in dynamic vision
COSYNE 2022
Inferring the order of stable and context dependent perceptual biases in human vision
COSYNE 2023
Leveraging computational and animal models of vision to probe atypical emotion recognition in autism
COSYNE 2023
Biologically Realistic Computational Primitives of Neocortex Implemented on Neuromorphic Hardware Improve Vision Transformer Performance
COSYNE 2025
Enhancing Vision Robustness to Adversarial Attacks through Foveal-Peripheral and Saccadic Mechanisms
COSYNE 2025
Recurrent connectivity supports motion detection in connectome-constrained models of fly vision
COSYNE 2025
TweetyBERT, a self-supervised vision transformer to automate birdsong annotation
COSYNE 2025
Analyzing animal behavior with domain-adapted vision-language models
FENS Forum 2024
Compromised binocular vision and reduced binocularity in the visual cortex of postsynaptic density 95 (PSD-95) knock-out mice
FENS Forum 2024
Connectional subdivisions reflect neuronal features of the various sectors of the macaque ventrolateral prefrontal cortex
FENS Forum 2024
Exploring laryngeal effects of dorsolateral periaqueductal grey stimulation in anesthetized rats: Implications for c-Fos and FOXP2 expression in the nucleus ambiguus subdivisions
FENS Forum 2024
Exploring pupil dynamics in freely moving rats during active integration of vision and posture
FENS Forum 2024
Impact of barrel cortex lesions and sensory deprivation on perceptual decision-making: Insights from computer vision and time series clustering of freely moving behavioral strategies
FENS Forum 2024
The impact of the retinotopic subdivisions of area V1 on shaping the macaque connectome
FENS Forum 2024
Optogenetic stimulation in the visual thalamus for future brain vision prostheses
FENS Forum 2024
Projections from the ventral nucleus of the trapezoid body to all subdivisions of the rat cochlear nucleus
FENS Forum 2024
REST as a target for vision restoration
FENS Forum 2024
A retinotopic-and-orientation-based stimulation strategy induces neural activity patterns mimicking natural vision
FENS Forum 2024
Searching for input-output connectivity streams in the various subdivisions of mouse orbitofrontal cortex
FENS Forum 2024
Study of brain plasticity following loss of monocular vision in mice
FENS Forum 2024
Vision revival: NGF’s role in restoring retinal balance and activating BDNF’s lifesaving routes in diabetic retinopathy
FENS Forum 2024
When auditory rules over vision: The impact of temporal synchrony for auditory-visual sensory attenuation effects
FENS Forum 2024
AVATAR: AI Vision Analysis for Three-dimensional Action in Real-time
Neuromatch 5