modeling
University of Chicago - Grossman Center for Quantitative Biology and Human Behavior
The Grossman Center for Quantitative Biology and Human Behavior at the University of Chicago seeks outstanding applicants for multiple postdoctoral positions in computational and theoretical neuroscience.
Tansu Celikel
The School of Psychology (psychology.gatech.edu/) at the GEORGIA INSTITUTE OF TECHNOLOGY (www.gatech.edu/) invites nominations and applications for 5 open-rank tenure-track faculty positions with an anticipated start date of August 2023 or later. The successful applicant will be expected to demonstrate and develop an exceptional research program. The research area is open, but we are particularly interested in candidates whose scholarship complements existing School strengths in Adult Development and Aging, Cognition and Brain Science, Engineering Psychology, Work and Organizational Psychology, and Quantitative Psychology, and takes advantage of quantitative, mathematical, and/or computational methods. The School of Psychology is well-positioned in the College of Sciences at Georgia Tech, a university that promotes translational research from the laboratory and field to real-world applications in a variety of areas. The School offers multidisciplinary educational programs, graduate training, and research opportunities in the study of mind, brain, and behavior and the associated development of technologies that can improve human experience. Excellent research facilities support the School’s research and interdisciplinary graduate programs across the Institute. Georgia Tech’s commitment to interdisciplinary collaboration has fostered fruitful interactions between psychology faculty and faculty in the sciences, computing, business, engineering, design, and liberal arts. Located in the heart of Atlanta, one of the nation's most academic, entrepreneurial, creative, and diverse cities, with excellent quality of life, the School actively develops and maintains a rich network of academic and applied behavioral science/industrial partnerships in and beyond Atlanta.
Candidates whose research programs foster collaborative interactions with other members of the School and further contribute to bridge-building with other academic and research units at Tech and industries are particularly encouraged to apply. Applications can be submitted online (bit.ly/Join-us-at-GT-Psych) and should include a Cover Letter, Curriculum Vitae (including a list of publications), Research Statement, Teaching Statement, DEI (diversity, equity, and inclusion) statement, and contact information of at least three individuals who have agreed to provide a reference in support of the application if asked. Evaluation of applications will begin October 10th, 2022 and continue until all positions are filled. Questions about this search can be addressed to faculty_search@psych.gatech.edu. Portal questions will be answered by Tikica Platt, the School’s HR director, and questions about positions by the co-chairs of the search committee, Ruth Kanfer and Tansu Celikel.
Prof Geoff Goodhill
A new NIH-funded collaboration between David Prober (Caltech), Thai Truong (USC) and Geoff Goodhill (Washington University in St Louis) aims to gain new insight into the neural circuits underlying sleep, through a combination of whole-brain neural recordings in zebrafish and theoretical/computational modeling. The Goodhill lab is now looking for 2 postdocs for the modeling and computational analysis components. Using novel 2-photon imaging technologies Prober and Truong will record from the entire larval zebrafish brain at single-neuron resolution continuously for long periods of time, examining neural circuit activity during normal day-night cycles and in response to genetic and pharmacological perturbations. The Goodhill lab will analyze the resulting huge datasets using a variety of sophisticated computational approaches, and use these results to build new theoretical models that reveal how neural circuits interact to govern sleep. Theoretical and experimental work will be intimately linked.
Navin Pokala
The Department of Biological and Chemical Sciences at New York Institute of Technology seeks outstanding applicants for a tenure-track position at the Assistant Professor level to develop a research program in the broadly defined fields of biostatistics, bioinformatics, or computational biology that complements existing research programs and has the potential to establish external collaborations. The successful candidate will teach introductory and advanced undergraduate courses in the biological sciences, notably Biostatistics. The Department has undergraduate programs in Biology, Chemistry, and Biotechnology at the New York City and Long Island (Old Westbury) campuses. New York Tech emphasizes interdisciplinary scholarship, research, and teaching. Department faculty research interests are diverse, including medicinal and organic chemistry, neuroscience, cell and molecular biology, genetics, biochemistry, microbiology, computational chemistry, and analytical chemistry. Faculty in the Department have ample opportunity to collaborate with faculty at New York Tech’s College of Engineering and Computer Sciences and College of Osteopathic Medicine.
Hayder Amin
The position is focused on developing a brain-inspired computational model that uses parallel, non-linear algorithms to investigate neurogenesis complexity in large-scale systems. The successful applicant will develop a neurogenic-plasticity-inspired, bottom-up computational metamodel using our unique, experimentally derived multidimensional parameters for a cortico-hippocampal circuit. The project aims to link computational modeling to experimental neuroscience to provide explicit bidirectional predictions of complex performance and of the neurogenic network reserve available for functional compensation to brain demands in health and disease.
Dr. Simon Danner
A Postdoctoral Fellow/Research Associate position is available in Dr. Simon Danner’s laboratory at the Department of Neurobiology and Anatomy, Drexel University College of Medicine to study the spinal locomotor circuitry and its interactions with the musculoskeletal system and afferent feedback. The qualified postdoc will work on several collaborative, interdisciplinary, NIH-funded projects to uncover the connectivity and function of somatosensory afferents and various genetically or anatomically identified interneurons. The studies involve the development of computer models of mouse, rat, and cat biomechanics connected with models of the spinal locomotor circuitry. The successful candidate will closely collaborate with other computational and experimental neuroscientists: they will use experimental data to implement and refine the model, and use the model to derive predictions that will then be tested experimentally by our collaborators.
Essential Functions:
• Work with existing and develop new biomechanical models of the mouse, rat, and cat
• Develop neural network models of the spinal locomotor circuits
• Integrate the neural network and biomechanical models to simulate locomotor behavior
• Use numerical optimization to optimize the neuromechanical models
• Apply machine learning/reinforcement learning
• Use the models to derive experimentally testable predictions
• Closely collaborate with experimental neuroscientists
• Analyze kinematic and electrophysiological data
• Write and submit research manuscripts
• Present novel findings at national and international conferences
The qualified candidate will benefit from joining a well-funded research group that works in a dynamic, collaborative, and interdisciplinary environment.
The highly collegial Danner lab is a member of the Neuroengineering Program, the Theoretical & Computational Neuroscience group, and the Spinal Cord Research Center within Drexel University College of Medicine’s Department of Neurobiology and Anatomy (http://drexel.edu/medicine/About/Departments/Neurobiology-Anatomy/) in Philadelphia, PA. The Department provides an outstanding scientific environment for multidisciplinary training. Interactions and collaborations between labs and between other departments are encouraged.
Marwen Belkaid
A PhD thesis position in neurorobotics is available on the topic of modeling affective processes for visual attention, decision-making, and social behavior. The thesis project will be carried out at the ETIS Laboratory (CNRS UMR8051) of CY Cergy Paris University, located in the Paris region.
Geoffrey J Goodhill
An NIH-funded collaboration between David Prober (Caltech), Thai Truong (USC) and Geoff Goodhill (Washington University in St Louis) aims to gain new insight into the neural circuits underlying sleep, through a combination of whole-brain neural recordings in zebrafish and theoretical/computational modeling. A postdoc position is available in the Goodhill lab to contribute to the modeling and computational analysis components. Using novel 2-photon imaging technologies Prober and Truong are recording from the entire larval zebrafish brain at single-neuron resolution continuously for long periods of time, examining neural circuit activity during normal day-night cycles and in response to genetic and pharmacological perturbations. The Goodhill lab is analyzing the resulting huge datasets using a variety of sophisticated computational approaches, and using these results to build new theoretical models that reveal how neural circuits interact to govern sleep.
Benoît Frénay
The Faculty of Computer Science at UNamur has an open academic position in distributed systems. The Faculty is more than 50 years old and is located in Namur, a historic city at the center of Wallonia. We are looking for candidates who will create and contribute to collaborations in line with the Faculty's areas of interest (software engineering, data engineering, data science, artificial intelligence, security, formal methods, modeling...).
Chloé Bourgeois-Antonini
The M.Sc. Mod4NeuCog is a two-year interdisciplinary master's program at Université Côte d’Azur (Nice, France) that aims to train active researchers at the crossroads of computer science, applied mathematics, and cognitive neuroscience. Students learn to model cognitive functions using mathematical and computational tools and graduate as specialists in computational neuro/cognitive science, able to work in fully interdisciplinary settings with a strong foundation in mathematics.
Prof. Erik De Schutter
A postdoctoral position is available in the Computational Neuroscience Unit (https://groups.oist.jp/cnu) of Prof. Erik De Schutter at the Okinawa Institute of Science and Technology for a researcher interested in using modeling to better understand cerebellar properties and function. Candidates should have good knowledge of cerebellar anatomy and physiology related to previous modeling work, and should be open to an explorative approach. Depending on prior experience and interest, the focus can be on modeling the cerebellum and its neurons and/or on analyzing experimental data obtained through collaboration. The postdoc will interact with other researchers and students in the lab who are working on cerebellar modeling projects. We offer attractive working conditions in an English-language graduate university located on a beautiful subtropical island. The starting date is any time before the end of 2024. Send a curriculum vitae, a summary of research interests and experience, and the names of two referees to Prof. Erik De Schutter at erik@oist.jp.
Stefan Kiebel
We are hiring a Postdoctoral Research Associate at the Chair of Cognitive Computational Neuroscience, TU Dresden (Germany). The position is part of a DFG-funded project on neurocomputational mechanisms of decision-making through forward planning and state abstraction. Project highlights include modeling human learning and decision-making using probabilistic approaches, analyzing behavioral and fMRI data, collaborative work with experimentalists and theorists, and the opportunity to design and run experiments.
Organization of thalamic networks and mechanisms of dysfunction in schizophrenia and autism
Thalamic networks, at the core of thalamocortical and thalamosubcortical communication, underlie processes of perception, attention, memory, emotion, and the sleep-wake cycle, and are disrupted in mental disorders, including schizophrenia and autism. However, the underlying mechanisms of pathology are unknown. I will present novel evidence on key organizational principles and structural and molecular features of thalamocortical networks, as well as critical thalamic pathway interactions that are likely affected in disorders. These data can facilitate modeling of typical and abnormal brain function and can provide the foundation for understanding the heterogeneous disruption of these networks in sleep disorders, attention deficits, and cognitive and affective impairments in schizophrenia and autism, with important implications for the design of targeted therapeutic interventions.
OpenNeuro FitLins GLM: An Accessible, Semi-Automated Pipeline for OpenNeuro Task fMRI Analysis
In this talk, I will discuss the OpenNeuro FitLins GLM package and provide an illustration of the analytic workflow. OpenNeuro FitLins GLM is a semi-automated pipeline that reduces barriers to analyzing task-based fMRI data from OpenNeuro's 600+ task datasets. Created for psychology, psychiatry, and cognitive neuroscience researchers without extensive computational expertise, the tool automates what is otherwise a largely manual process, often a compilation of in-house scripts, for data retrieval, validation, quality control, statistical modeling, and reporting that, in some cases, may require weeks of effort. The workflow abides by open-science practices, enhancing reproducibility, and incorporates community feedback for model improvement. The pipeline integrates BIDS-compliant datasets and fMRIPrep preprocessed derivatives, and dynamically creates BIDS Stats Models specifications (with FitLins) to perform common mass-univariate (GLM) analyses. To enhance and standardize reporting, it generates comprehensive reports that include design matrices, statistical maps, and COBIDAS-aligned reporting that is fully reproducible from the model specifications and derivatives. OpenNeuro FitLins GLM has been tested on over 30 datasets spanning 50+ unique fMRI tasks (e.g., working memory, social processing, emotion regulation, decision-making, motor paradigms), reducing analysis times from weeks to hours on high-performance computers, thereby enabling researchers to conduct robust single-study, meta-, and mega-analyses of task fMRI data with significantly improved accessibility, standardized reporting, and reproducibility.
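The mass-univariate GLM at the heart of pipelines like this can be sketched in a few lines. The sketch below is illustrative only (the regressor, data dimensions, and effect sizes are all hypothetical, not drawn from the package): one least-squares fit per voxel, followed by a contrast t-statistic.

```python
import numpy as np

rng = np.random.default_rng(0)
n_scans, n_voxels = 200, 50

# Hypothetical task regressor: alternating 20-scan off/on blocks
task = np.tile(np.r_[np.zeros(20), np.ones(20)], 5)
X = np.column_stack([task, np.ones(n_scans)])  # task regressor + intercept

# Simulated BOLD data: voxel 0 responds to the task, the rest are noise
Y = rng.standard_normal((n_scans, n_voxels))
Y[:, 0] += 2.0 * task

# Mass-univariate GLM: one least-squares fit per voxel
beta, res_ss, _, _ = np.linalg.lstsq(X, Y, rcond=None)

# t-statistic for the task regressor at each voxel
dof = n_scans - X.shape[1]
sigma2 = res_ss / dof
c = np.array([1.0, 0.0])  # contrast: task effect
var_c = c @ np.linalg.inv(X.T @ X) @ c
t = (c @ beta) / np.sqrt(sigma2 * var_c)

print(int(np.argmax(t)))  # voxel 0 shows the strongest task effect
```

A pipeline like the one described adds, on top of this core computation, HRF convolution, confound regression, multi-level (run/subject/group) models, and standardized reporting.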
Neurobiological constraints on learning: bug or feature?
Understanding how brains learn requires bridging evidence across scales—from behaviour and neural circuits to cells, synapses, and molecules. In our work, we use computational modelling and data analysis to explore how the physical properties of neurons and neural circuits constrain learning. These include limits imposed by brain wiring, energy availability, molecular noise, and the 3D structure of dendritic spines. In this talk I will describe one such project testing if wiring motifs from fly brain connectomes can improve performance of reservoir computers, a type of recurrent neural network. The hope is that these insights into brain learning will lead to improved learning algorithms for artificial systems.
Neural mechanisms of rhythmic motor control in Drosophila
All animal locomotion is rhythmic, whether it is achieved through undulatory movement of the whole body or the coordination of articulated limbs. Neurobiologists have long studied locomotor circuits that produce rhythmic activity from non-rhythmic input, also called central pattern generators (CPGs). However, the cellular and microcircuit implementation of a walking CPG has not been described for any limbed animal. New comprehensive connectomes of the fruit fly ventral nerve cord (VNC) provide an opportunity to study rhythmogenic walking circuits at a synaptic scale. We use a data-driven network modeling approach to identify and characterize a putative walking CPG in the Drosophila leg motor system.
Computational modelling of ocular pharmacokinetics
Pharmacokinetics in the eye is an important factor in the success of ocular drug delivery and treatment. Pharmacokinetic features determine the feasible routes of drug administration and the dosing levels and intervals, and they influence eventual drug responses. Several physical, biochemical, and flow-related barriers limit drug exposure of anterior and posterior ocular target tissues during local (topical, subconjunctival, intravitreal) and systemic (intravenous, per oral) administration. Mathematical models integrate the joint impact of these barriers on ocular pharmacokinetics (PK), thereby helping drug development. The models are useful for describing (top-down) and predicting (bottom-up) the pharmacokinetics of ocular drugs. This is also useful in the design and development of new drug molecules and drug delivery systems. Furthermore, the models can be used for interspecies translation and for probing disease effects on pharmacokinetics. In this lecture, ocular pharmacokinetics and current modelling methods (noncompartmental analyses; compartmental, physiologically based, and finite element models) are introduced. Future challenges are also highlighted (e.g., intra-tissue distribution, prediction of drug responses, active transport).
An inconvenient truth: pathophysiological remodeling of the inner retina in photoreceptor degeneration
Photoreceptor loss is the primary cause of vision impairment and blindness in diseases such as retinitis pigmentosa and age-related macular degeneration. However, the death of rods and cones allows retinoids to permeate the inner retina, causing retinal ganglion cells to become spontaneously hyperactive, severely reducing the signal-to-noise ratio and creating interference in the communication between the surviving retina and the brain. Treatments aimed at blocking or reducing hyperactivity improve vision mediated by surviving photoreceptors and could enhance the fidelity of signals generated by vision restoration methodologies.
Mapping the neural dynamics of dominance and defeat
Social experiences can produce lasting changes in behavior and affective state. In particular, repeated wins and losses during fighting can respectively facilitate and suppress future aggressive behavior, leading to persistent high-aggression or low-aggression states. We use a combination of techniques for multi-region neural recording, perturbation, behavioral analysis, and modeling to understand how nodes in the brain’s subcortical “social decision-making network” encode and transform aggressive motivation into action, and how these circuits change following social experience.
Screen Savers: Protecting adolescent mental health in a digital world
In our rapidly evolving digital world, there is increasing concern about the impact of digital technologies and social media on the mental health of young people. Policymakers and the public are nervous. Psychologists are facing mounting pressure to deliver evidence that can inform policies and practices to safeguard both young people and society at large. However, research progress is slow while technological change is accelerating. My talk will reflect on this, both as a question of psychological science and of metascience. Digital companies have designed highly popular environments that differ in important ways from traditional offline spaces. By revisiting the foundations of psychology (e.g., development and cognition) and considering the impact of digital change on theories and findings, we gain deeper insights into questions such as the following. (1) How do digital environments exacerbate developmental vulnerabilities that predispose young people to mental health conditions? (2) How do digital designs interact with cognitive and learning processes, formalised through computational approaches such as reinforcement learning or Bayesian modelling? However, we also need to face deeper questions about what it means to do science about new technologies and the challenge of keeping pace with technological advancement. I will therefore discuss the concept of ‘fast science’, whereby, during crises, scientists might lower their standards of evidence to reach conclusions more quickly. Might psychologists want to take this approach in the face of technological change and looming concerns? The talk concludes with a discussion of such strategies for 21st-century psychology research in the era of digitalization.
The Brain Prize winners' webinar
This webinar brings together three leaders in theoretical and computational neuroscience—Larry Abbott, Haim Sompolinsky, and Terry Sejnowski—to discuss how neural circuits generate fundamental aspects of the mind. Abbott illustrates mechanisms in electric fish that differentiate self-generated electric signals from external sensory cues, showing how predictive plasticity and two-stage signal cancellation mediate a sense of self. Sompolinsky explores attractor networks, revealing how discrete and continuous attractors can stabilize activity patterns, enable working memory, and incorporate chaotic dynamics underlying spontaneous behaviors. He further highlights the concept of object manifolds in high-level sensory representations and raises open questions on integrating connectomics with theoretical frameworks. Sejnowski bridges these motifs with modern artificial intelligence, demonstrating how large-scale neural networks capture language structures through distributed representations that parallel biological coding. Together, their presentations emphasize the synergy between empirical data, computational modeling, and connectomics in explaining the neural basis of cognition—offering insights into perception, memory, language, and the emergence of mind-like processes.
Sensory cognition
This webinar features presentations from SueYeon Chung (New York University) and Srinivas Turaga (HHMI Janelia Research Campus) on theoretical and computational approaches to sensory cognition. Chung introduced a “neural manifold” framework to capture how high-dimensional neural activity is structured into meaningful manifolds reflecting object representations. She demonstrated that manifold geometry—shaped by radius, dimensionality, and correlations—directly governs a population’s capacity for classifying or separating stimuli under nuisance variations. Applying these ideas as a data analysis tool, she showed how measuring object-manifold geometry can explain transformations along the ventral visual stream and suggested that manifold principles also yield better self-supervised neural network models resembling mammalian visual cortex. Turaga described simulating the entire fruit fly visual pathway using its connectome, modeling 64 key cell types in the optic lobe. His team’s systematic approach—combining sparse connectivity from electron microscopy with simple dynamical parameters—recapitulated known motion-selective responses and produced novel testable predictions. Together, these studies underscore the power of combining connectomic detail, task objectives, and geometric theories to unravel neural computations bridging from stimuli to cognitive functions.
Brain-on-a-Chip: Advanced In Vitro Platforms for Drug Screening and Disease Modeling
Unmotivated bias
In this talk, I will explore how social affective biases arise even in the absence of motivational factors, as an emergent outcome of the basic structure of social learning. In several studies, we found that initial negative interactions with some members of a group can cause subsequent avoidance of the entire group, and that this avoidance perpetuates stereotypes. Additional cognitive modeling revealed that approach and avoidance behavior based on biased beliefs not only influences the evaluative (positive or negative) impressions of group members, but also shapes the depth of the cognitive representations available for learning about individuals. In other words, people have richer cognitive representations of members of groups that are not avoided, akin to individualized versus group-level categories. I will end by presenting a series of multi-agent reinforcement learning simulations that demonstrate the emergence of these social-structural feedback loops in the development and maintenance of affective biases.
Contribution of computational models of reinforcement learning to neuroscience. Keywords: computational modeling, reward, learning, decision-making, conditioning, navigation, dopamine, basal ganglia, prefrontal cortex, hippocampus
Beyond Homogeneity: Characterizing Brain Disorder Heterogeneity through EEG and Normative Modeling
Electroencephalography (EEG) has been thoroughly studied for decades in psychiatry research, yet its integration into clinical practice as a diagnostic/prognostic tool remains unachieved. We hypothesize that a key reason is underlying patient heterogeneity, which is overlooked in psychiatric EEG research that relies on a case-control approach. We combine HD-EEG with normative modeling to quantify this heterogeneity using two well-established and extensively investigated EEG characteristics, spectral power and functional connectivity, across a cohort of 1674 patients with attention-deficit/hyperactivity disorder, autism spectrum disorder, learning disorder, or anxiety, and 560 matched controls. Normative models showed that deviations from population norms among patients were highly heterogeneous and frequency-dependent. The spatial overlap of deviations across patients did not exceed 40% for spectral power and 24% for functional connectivity. Considering individual deviations in patients significantly enhanced comparative analyses, and the identification of patient-specific markers correlated with clinical assessments, representing a crucial step towards attaining precision psychiatry through EEG.
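In its simplest form, the normative-modeling approach described above reduces to per-feature z-scores against a reference cohort. The sketch below uses simulated data (the cohort sizes, channel count, deviation threshold, and effect sizes are hypothetical, not the study's) to show how individual deviation maps and their spatial overlap can be computed:

```python
import numpy as np

rng = np.random.default_rng(2)
n_controls, n_patients, n_channels = 200, 50, 64

# Hypothetical log-power values for one frequency band (subjects x channels)
controls = rng.normal(0.0, 1.0, (n_controls, n_channels))
patients = rng.normal(0.0, 1.0, (n_patients, n_channels))
# Each patient deviates at a different, largely non-overlapping set of channels
for i in range(n_patients):
    patients[i, rng.choice(n_channels, 5, replace=False)] += 3.0

# Normative model (simplest form): per-channel mean and SD from controls
mu, sd = controls.mean(axis=0), controls.std(axis=0, ddof=1)

# Patient deviation maps as z-scores relative to the norm
z = (patients - mu) / sd
extreme = np.abs(z) > 2.6  # a conventional threshold for extreme deviation

# Spatial overlap: fraction of patients deviating at each channel
overlap = extreme.mean(axis=0)
print(round(overlap.max(), 2))  # low overlap = heterogeneous deviation patterns
```

A full normative model would regress out age, sex, and site before computing deviations; the z-score logic and the overlap statistic stay the same.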
Why age-related macular degeneration is a mathematically tractable disease
Among all prevalent diseases with a central neurodegeneration, AMD can be considered the most promising in terms of prevention and early intervention, due to several factors surrounding the neural geometry of the foveal singularity.
• Steep gradients of cell density, deployed in a radially symmetric fashion, can be modeled with a difference of Gaussian curves.
• These steep gradients give rise to huge, spatially aligned biologic effects, summarized as the Center of Cone Resilience, Surround of Rod Vulnerability.
• Widely used clinical imaging technology provides cellular- and subcellular-level information.
• Data are now available at all timelines: clinical, lifespan, evolutionary.
• Snapshots are available from tissues (histology, analytic chemistry, gene expression).
• A viable biogenesis model exists for drusen, the largest population-level intraocular risk factor for progression.
• The biogenesis model shares molecular commonality with atherosclerotic cardiovascular disease, for which there have been decades of public health success.
• Animal and cell model systems are emerging to test these ideas.
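The difference-of-Gaussians description of radially symmetric cell-density gradients can be sketched directly. The parameters below are purely illustrative (not fitted to histology); the point is that a narrow Gaussian for the central cone peak and a wide-minus-narrow pair for the rod-free center with its surrounding rod annulus capture the steep foveal gradients:

```python
import numpy as np

# Radial eccentricity from the foveal center (degrees, hypothetical grid)
r = np.linspace(0, 10, 500)

def gaussian(r, amplitude, width):
    """Radially symmetric Gaussian profile."""
    return amplitude * np.exp(-(r / width) ** 2)

# Illustrative parameters: steep central cone peak
cones = gaussian(r, 200.0, 0.6)

# Difference of Gaussians: rod-free center with an annular rod peak
rods = gaussian(r, 120.0, 6.0) - gaussian(r, 120.0, 1.0)

peak_rod_ecc = r[np.argmax(rods)]
print(round(peak_rod_ecc, 2))  # the rod annulus peaks at a nonzero eccentricity
```

With these toy widths the rod profile is zero at the center and peaks a couple of degrees out, mirroring the "Center of Cone Resilience, Surround of Rod Vulnerability" geometry described above.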
Updating our models of the basal ganglia using advances in neuroanatomy and computational modeling
Generative models for video games (rescheduled)
Developing agents capable of modeling complex environments and human behaviors within them is a key goal of artificial intelligence research. Progress towards this goal has exciting potential for applications in video games, from new tools that empower game developers to realize new creative visions, to enabling new kinds of immersive player experiences. This talk focuses on recent advances of my team at Microsoft Research towards scalable machine learning architectures that effectively capture human gameplay data. In the first part of my talk, I will focus on diffusion models as generative models of human behavior. Previously shown to have impressive image generation capabilities, I present insights that unlock applications to imitation learning for sequential decision making. In the second part of my talk, I discuss a recent project taking ideas from language modeling to build a generative sequence model of an Xbox game.
Modelling the fruit fly brain and body
Through recent advances in microscopy, we now have an unprecedented view of the brain and body of the fruit fly Drosophila melanogaster. We now know the connectivity at single neuron resolution across the whole brain. How do we translate these new measurements into a deeper understanding of how the brain processes sensory information and produces behavior? I will describe two computational efforts to model the brain and the body of the fruit fly. First, I will describe a new modeling method which makes highly accurate predictions of neural activity in the fly visual system as measured in the living brain, using only measurements of its connectivity from a dead brain [1], joint work with Jakob Macke. Second, I will describe a whole body physics simulation of the fruit fly which can accurately reproduce its locomotion behaviors, both flight and walking [2], joint work with Google DeepMind.
Modeling human brain development and disease: the role of primary cilia
Neurodevelopmental disorders (NDDs) impose a global burden, affecting an increasing number of individuals. While some causative genes have been identified, understanding of the human-specific mechanisms involved in these disorders remains limited. Traditional gene-driven approaches for modeling brain diseases have failed to capture the diverse and convergent mechanisms at play. Centrosomes and cilia act as intermediaries between environmental and intrinsic signals, regulating cellular behavior. Mutations or dosage variations disrupting their function have been linked to brain formation deficits, highlighting their importance, yet their precise contributions remain largely unknown. We therefore aim to investigate whether the centrosome/cilia axis is crucial for brain development and serves as a hub for human-specific mechanisms disrupted in NDDs. To this end, we first demonstrated species-specific and cell-type-specific differences in cilia gene expression during mouse and human corticogenesis. Then, to dissect their role, we induced their ectopic overexpression or silencing in the developing mouse cortex or in human brain organoids. Our findings suggest that manipulation of cilia genes alters both the numbers and the positions of NPCs and neurons in the developing cortex. Interestingly, primary cilium morphology is disrupted: we find changes in length, orientation, and number that lead to disruption of the apical belt and altered delamination profiles during development. Our results give insight into the role of primary cilia in human cortical development and address fundamental questions regarding the diversity and convergence of gene function in development and disease manifestation. This work has the potential to uncover novel pharmacological targets, facilitate personalized medicine, and improve the lives of individuals affected by NDDs through targeted cilia-based therapies.
Modeling the fruit fly brain and body
Learning representations of specifics and generalities over time
There is a fundamental tension between storing discrete traces of individual experiences, which allows recall of particular moments in our past without interference, and extracting regularities across these experiences, which supports generalization and prediction in similar situations in the future. One influential proposal for how the brain resolves this tension is that it separates the processes anatomically into Complementary Learning Systems, with the hippocampus rapidly encoding individual episodes and the neocortex slowly extracting regularities over days, months, and years. But this does not explain our ability to learn and generalize from new regularities in our environment quickly, often within minutes. We have put forward a neural network model of the hippocampus that suggests that the hippocampus itself may contain complementary learning systems, with one pathway specializing in the rapid learning of regularities and a separate pathway handling the region’s classic episodic memory functions. This proposal has broad implications for how we learn and represent novel information of specific and generalized types, which we test across statistical learning, inference, and category learning paradigms. We also explore how this system interacts with slower-learning neocortical memory systems, with empirical and modeling investigations into how the hippocampus shapes neocortical representations during sleep. Together, the work helps us understand how structured information in our environment is initially encoded and how it then transforms over time.
Modeling idiosyncratic evaluation of faces
Reimagining the neuron as a controller: A novel model for Neuroscience and AI
We build upon and expand the efficient coding and predictive information models of neurons, presenting a novel perspective that neurons not only predict but also actively influence their future inputs through their outputs. We introduce the concept of neurons as feedback controllers of their environments, a role traditionally considered computationally demanding, particularly when the dynamical system characterizing the environment is unknown. By harnessing a novel data-driven control framework, we illustrate the feasibility of biological neurons functioning as effective feedback controllers. This innovative approach enables us to coherently explain various experimental findings that previously seemed unrelated. Our research has profound implications, potentially revolutionizing the modeling of neuronal circuits and paving the way for the creation of alternative, biologically inspired artificial neural networks.
Measures and models of multisensory integration in reaction times
First, a new measure of multisensory integration (MI) for reaction times (RTs) is proposed that takes the entire RT distribution into account. Second, we present some recent developments in time-window-of-integration (TWIN) modeling, including a new proposal for the sound-induced flash illusion (SIFI).
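The abstract does not spell out the new measure, but a standard distribution-level quantity in this literature is Miller's race-model inequality, F_AV(t) ≤ F_A(t) + F_V(t). A minimal sketch of an integrated violation measure over the whole RT distribution, using synthetic RTs for illustration (this is not the speaker's proposed measure):

```python
import numpy as np

def ecdf(samples, t):
    """Empirical CDF of RT samples evaluated at times t."""
    return np.searchsorted(np.sort(samples), t, side="right") / len(samples)

def race_violation_area(rt_av, rt_a, rt_v, n_grid=200):
    """Integrated violation of the race-model bound
    F_AV(t) <= F_A(t) + F_V(t); a positive area indicates
    faster bimodal responses than any parallel race predicts."""
    t = np.linspace(0.0, max(map(max, (rt_av, rt_a, rt_v))), n_grid)
    bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)
    violation = np.clip(ecdf(rt_av, t) - bound, 0.0, None)
    return float(violation.sum() * (t[1] - t[0]))  # Riemann sum over the grid

# Synthetic reaction times (ms), for illustration only
rng = np.random.default_rng(0)
rt_a = rng.gamma(5.0, 60.0, 1000)   # auditory-only
rt_v = rng.gamma(5.0, 65.0, 1000)   # visual-only
rt_av = rng.gamma(5.0, 40.0, 1000)  # audiovisual, faster on average
print(race_violation_area(rt_av, rt_a, rt_v))
```

Because the measure integrates over the full time axis rather than testing the inequality at a few quantiles, it uses the entire RT distribution, in the spirit of the abstract.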
Modeling Primate Vision (and Language)
Modeling the Navigational Circuitry of the Fly
Navigation requires orienting oneself relative to landmarks in the environment, evaluating relevant sensory data, remembering goals, and converting all this information into motor commands that direct locomotion. I will present models, highly constrained by connectomic, physiological, and behavioral data, for how these functions are accomplished in the fly brain.
Bio-realistic multiscale modeling of cortical circuits
A central question in neuroscience is how the structure of brain circuits determines their activity and function. To explore this systematically, we developed a 230,000-neuron model of mouse primary visual cortex (area V1). The model integrates a broad array of experimental data: the distribution and morpho-electric properties of different neuron types in V1.
Trends in NeuroAI - SwiFT: Swin 4D fMRI Transformer
Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). Title: SwiFT: Swin 4D fMRI Transformer Abstract: Modeling spatiotemporal brain dynamics from high-dimensional data, such as functional Magnetic Resonance Imaging (fMRI), is a formidable task in neuroscience. Existing approaches for fMRI analysis utilize hand-crafted features, but the process of feature extraction risks losing essential information in fMRI scans. To address this challenge, we present SwiFT (Swin 4D fMRI Transformer), a Swin Transformer architecture that can learn brain dynamics directly from fMRI volumes in a memory- and computation-efficient manner. SwiFT achieves this by implementing a 4D window multi-head self-attention mechanism and absolute positional embeddings. We evaluate SwiFT using multiple large-scale resting-state fMRI datasets, including the Human Connectome Project (HCP), Adolescent Brain Cognitive Development (ABCD), and UK Biobank (UKB) datasets, to predict sex, age, and cognitive intelligence. Our experimental outcomes reveal that SwiFT consistently outperforms recent state-of-the-art models. Furthermore, by leveraging its end-to-end learning capability, we show that contrastive loss-based self-supervised pre-training of SwiFT can enhance performance on downstream tasks. Additionally, we employ an explainable AI method to identify the brain regions associated with sex classification. To our knowledge, SwiFT is the first Swin Transformer architecture to process 4D spatiotemporal brain functional data in an end-to-end fashion. Our work holds substantial potential in facilitating scalable learning of functional brain imaging in neuroscience research by reducing the hurdles associated with applying Transformer models to high-dimensional fMRI. Speaker: Junbeom Kwon is a research associate working in Prof. Jiook Cha’s lab at Seoul National University. Paper link: https://arxiv.org/abs/2307.05916
Mathematical and computational modelling of ocular hemodynamics: from theory to applications
Changes in ocular hemodynamics may be indicative of pathological conditions in the eye (e.g. glaucoma, age-related macular degeneration), but also elsewhere in the body (e.g. systemic hypertension, diabetes, neurodegenerative disorders). Thanks to its transparent fluids and structures that allow the light to go through, the eye offers a unique window on the circulation from large to small vessels, and from arteries to veins. Deciphering the causes that lead to changes in ocular hemodynamics in a specific individual could help prevent vision loss as well as aid in the diagnosis and management of diseases beyond the eye. In this talk, we will discuss how mathematical and computational modelling can help in this regard. We will focus on two main factors, namely blood pressure (BP), which drives the blood flow through the vessels, and intraocular pressure (IOP), which compresses the vessels and may impede the flow. Mechanism-driven models translate fundamental principles of physics and physiology into computable equations that allow for the identification of cause-and-effect relationships among interplaying factors (e.g. BP, IOP, blood flow). While invaluable for causality, mechanism-driven models are often based on simplifying assumptions to make them tractable for analysis and simulation; however, this often brings into question their relevance beyond theoretical explorations. Data-driven models offer a natural remedy to address these shortcomings. Data-driven methods may be supervised (based on labelled training data) or unsupervised (clustering and other data analytics) and they include models based on statistics, machine learning, deep learning and neural networks. Data-driven models naturally thrive on large datasets, making them scalable to a plethora of applications.
While invaluable for scalability, data-driven models are often perceived as black boxes, as their outcomes are difficult to explain in terms of fundamental principles of physics and physiology, and this limits the delivery of actionable insights. The combination of mechanism-driven and data-driven models allows us to harness the advantages of both: mechanism-driven models excel at interpretability but suffer from a lack of scalability, while data-driven models are excellent at scale but suffer in terms of generalizability and insights for hypothesis generation. This combined, integrative approach represents the pillar of the interdisciplinary approach to data science that will be discussed in this talk, with application to ocular hemodynamics and specific examples in glaucoma research.
Virtual Brain Twins for Brain Medicine and Epilepsy
Over the past decade we have demonstrated that the fusion of subject-specific structural information of the human brain with mathematical dynamic models allows building biologically realistic brain network models, which have a predictive value, beyond the explanatory power of each approach independently. The network nodes hold neural population models, which are derived using mean field techniques from statistical physics expressing ensemble activity via collective variables. Our hybrid approach fuses data-driven with forward-modeling-based techniques and has been successfully applied to explain healthy brain function and clinical translation including aging, stroke and epilepsy. Here we illustrate the workflow along the example of epilepsy: we reconstruct personalized connectivity matrices of human epileptic patients using diffusion tensor imaging (DTI). Subsets of brain regions generating seizures in patients with refractory partial epilepsy are referred to as the epileptogenic zone (EZ). During a seizure, paroxysmal activity is not restricted to the EZ, but may recruit other healthy brain regions and propagate activity through large brain networks. The identification of the EZ is crucial for the success of neurosurgery and presents one of the historically difficult questions in clinical neuroscience. The application of latest techniques in Bayesian inference and model inversion, in particular Hamiltonian Monte Carlo, allows the estimation of the EZ, including estimates of confidence and diagnostics of performance of the inference. The example of epilepsy nicely underscores the predictive value of personalized large-scale brain network models. The workflow of end-to-end modeling is an integral part of the European neuroinformatics platform EBRAINS and enables neuroscientists worldwide to build and estimate personalized virtual brains.
Metabolic Remodelling in the Developing Forebrain in Health and Disease
Little is known about the critical metabolic changes that neural cells have to undergo during development and how temporary shifts in this program can influence brain circuitries and behavior. Motivated by the identification of autism-associated mutations in SLC7A5, a transporter for metabolically essential large neutral amino acids (LNAAs), we utilized metabolomic profiling to investigate the metabolic states of the cerebral cortex across various developmental stages. Our findings reveal significant metabolic restructuring occurring in the forebrain throughout development, with specific groups of metabolites exhibiting stage-specific changes. Through the manipulation of Slc7a5 expression in neural cells, we discovered an interconnected relationship between the metabolism of LNAAs and lipids within the cortex. Neuronal deletion of Slc7a5 influences the postnatal metabolic state, resulting in a shift in lipid metabolism and a cell-type-specific modification in neuronal activity patterns. This ultimately gives rise to enduring circuit dysfunction.
NII Methods (journal club): NeuroQuery, comprehensive meta-analysis of human brain mapping
We will discuss this paper on Neuroquery, a relatively new web-based meta-analysis tool: https://elifesciences.org/articles/53385.pdf. This is different from Neurosynth in that it generates meta-analysis maps using predictive modeling from the string of text provided at the prompt, instead of performing inferential statistics to calculate the overlap of activation from different studies. This allows the user to generate predictive maps for more nuanced cognitive processes - especially for clinical populations which may be underrepresented in the literature compared to controls - and can be useful in generating predictions about where the activity will be for one's own study, and for creating ROIs.
Computational and mathematical approaches to myopigenesis
Myopia is predicted to affect 50% of all people worldwide by 2050, and is a risk factor for significant, potentially blinding ocular pathologies, such as retinal detachment and glaucoma. Thus, there is significant motivation to better understand the process of myopigenesis and to develop effective anti-myopigenic treatments. In nearly all cases of human myopia, scleral remodeling is an obligate step in the axial elongation that characterizes the condition. Here I will describe the development of a biomechanical assay based on transient unconfined compression of scleral samples. By treating the sclera as a poroelastic material, one can determine scleral biomechanical properties from extremely small samples, such as obtained from the mouse eye. These properties provide proxy measures of scleral remodeling, and have allowed us to identify all-trans retinoic acid (atRA) as a myopigenic stimulus in mice. I will also describe nascent collaborative work on modeling the transport of atRA in the eye.
Microbial modulation of zebrafish behavior and brain development
There is growing recognition that host-associated microbiotas modulate intrinsic neurodevelopmental programs including those underlying human social behavior. Despite this awareness, the fundamental processes are generally not understood. We discovered that the zebrafish microbiota is necessary for normal social behavior. By examining neuronal correlates of behavior, we found that the microbiota restrains neurite complexity and targeting of key forebrain neurons within the social behavior circuitry. The microbiota is also necessary for both localization and molecular functions of forebrain microglia, brain-resident phagocytes that remodel neuronal arbors. In particular, the microbiota promotes expression of complement signaling pathway components important for synapse remodeling. Our work provides evidence that the microbiota modulates zebrafish social behavior by stimulating microglial remodeling of forebrain circuits during early neurodevelopment and suggests molecular pathways for therapeutic interventions during atypical neurodevelopment.
Off-policy learning in the basal ganglia
I will discuss work with Jack Lindsey modeling reinforcement learning for action selection in the basal ganglia. I will argue that the presence of multiple brain regions, in addition to the basal ganglia, that contribute to motor control motivates the need for an off-policy basal ganglia learning algorithm. I will then describe a biological implementation of such an algorithm that predicts tuning of dopamine neurons to a quantity we call "action surprise," in addition to reward prediction error. In the same model, an implementation of learning from a motor efference copy also predicts a novel solution to the problem of multiplexing feedforward and efference-related striatal activity. The solution exploits the difference between D1 and D2-expressing medium spiny neurons and leads to predictions about striatal dynamics.
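The off-policy premise here is that actions are generated partly by circuits other than the learner, so the learner must evaluate a target policy from experience it did not choose. That premise can be illustrated with textbook Q-learning, where the behavior policy is fully random yet the learned values still recover the optimal greedy policy; the MDP and constants below are invented for illustration and this is not the speaker's biological implementation:

```python
import numpy as np

# Toy 2-state, 2-action MDP: taking action 1 in state 0 yields reward,
# and the next state is random.  Actions come from a fixed random
# "other controller" (behavior policy), standing in for motor commands
# generated outside the learner; Q-learning still recovers the optimal
# greedy policy because its update is off-policy (max over next actions).
rng = np.random.default_rng(1)
n_states, n_actions, gamma, alpha = 2, 2, 0.9, 0.1
Q = np.zeros((n_states, n_actions))

def step(s, a):
    reward = 1.0 if (s == 0 and a == 1) else 0.0
    return reward, int(rng.integers(n_states))

s = 0
for _ in range(20000):
    a = int(rng.integers(n_actions))                  # behavior != target policy
    r, s_next = step(s, a)
    td_error = r + gamma * Q[s_next].max() - Q[s, a]  # off-policy TD error
    Q[s, a] += alpha * td_error
    s = s_next

print(Q.argmax(axis=1))  # greedy policy recovered from off-policy experience
```

The talk's "action surprise" signal and the D1/D2 multiplexing go beyond this sketch; the code only makes the on-policy/off-policy distinction concrete.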
Computational models and experimental methods for the human cornea
The eye is a multi-component biological system, where mechanics, optics, transport phenomena and chemical reactions are strictly interlaced, characterized by the typical bio-variability in sizes and material properties. The eye's response to external actions is patient-specific and can be predicted only by a customized approach that accounts for the multiple physics and for the intrinsic microstructure of the tissues, developed with the aid of forefront means of computational biomechanics. Our activity in the last years has been devoted to the development of a comprehensive model of the cornea that aims at being entirely patient-specific. While the geometrical aspects are fully under control, given the sophisticated diagnostic machinery able to provide fully three-dimensional images of the eye, the major difficulties are related to the characterization of the tissues, which requires the setup of in-vivo tests to complement the well documented results of in-vitro tests. The interpretation of in-vivo tests is very complex, since the entire structure of the eye is involved and the characterization of the single tissue is not trivial. The availability of micromechanical models constructed from detailed images of the eye represents an important support for the characterization of the corneal tissues, especially in the case of pathologic conditions. In this presentation I will provide an overview of the research developed in our group in terms of computational models and experimental approaches developed for the human cornea.
Face and voice perception as a tool for characterizing perceptual decisions and metacognitive abilities across the general population and psychosis spectrum
Humans constantly make perceptual decisions on human faces and voices. These regularly come with the challenge of receiving only uncertain sensory evidence, resulting from noisy input and noisy neural processes. Efficiently adapting one’s internal decision system including prior expectations and subsequent metacognitive assessments to these challenges is crucial in everyday life. However, the exact decision mechanisms and whether these represent modifiable states remain unknown in the general population and clinical patients with psychosis. Using data from a laboratory-based sample of healthy controls and patients with psychosis as well as a complementary, large online sample of healthy controls, I will demonstrate how a combination of perceptual face and voice recognition decision fidelity, metacognitive ratings, and Bayesian computational modelling may be used as indicators to differentiate between non-clinical and clinical states in the future.
Assigning credit through the "other” connectome
Learning in neural networks requires assigning the right values to thousands to trillions of individual connections, so that the network as a whole produces the desired behavior. Neuroscientists have gained insights into this “credit assignment” problem through decades of experimental, modeling, and theoretical studies. This has suggested key roles for synaptic eligibility traces and top-down feedback signals, among other factors. Here we study the potential contribution of another type of signaling that is being revealed in ever greater fidelity by ongoing molecular and genomics studies. This is the set of modulatory pathways local to a given circuit, which form an intriguing second type of connectome overlaid on top of synaptic connectivity. We will share ongoing modeling and theoretical work that explores the possible roles of this local modulatory connectome in network learning.
Analogical Reasoning and Generalization for Interactive Task Learning in Physical Machines
Humans are natural teachers; learning through instruction is one of the most fundamental ways that we learn. Interactive Task Learning (ITL) is an emerging research agenda that studies the design of complex intelligent robots that can acquire new knowledge through natural human teacher–robot learner interactions. ITL methods are particularly useful for designing intelligent robots whose behavior can be adapted by the humans collaborating with them. In this talk, I will summarize our recent findings on the structure that human instruction naturally has and motivate an intelligent system design that can exploit this structure. The system – AILEEN – is being developed using the Common Model of Cognition. Architectures that implement the Common Model of Cognition – Soar, ACT-R, and Sigma – have a prominent place in research on cognitive modeling as well as on designing complex intelligent agents. However, they miss a critical piece of intelligent behavior – analogical reasoning and generalization. I will introduce a new memory – concept memory – that integrates with a Common Model of Cognition architecture and supports ITL.
From cells to systems: multiscale studies of the epileptic brain
It is increasingly recognized that epilepsy affects human brain organization across multiple scales, ranging from cellular alterations in specific regions to macroscale network imbalances. My talk will give an overview of an emerging paradigm that integrates cellular, neuroimaging, and network modelling approaches to faithfully characterize the extent of structural and functional alterations in the common epilepsies. I will also discuss how a multiscale framework can help to derive clinically useful biomarkers of dysfunction, and how these methods may guide surgical planning and prognostics.
Unique features of oxygen delivery to the mammalian retina
Like all neural tissue, the retina has a high metabolic demand, and requires a constant supply of oxygen. Second- and third-order neurons are supplied by the retinal circulation, whose characteristics are similar to those of the brain circulation. However, the photoreceptor region, which occupies half of the retinal thickness, is avascular, and relies on diffusion of oxygen from the choroidal circulation, whose properties are very different, as well as from the retinal circulation. By fitting diffusion models to oxygen measurements made with oxygen microelectrodes, it is possible to understand the relative roles of the two circulations under normal conditions of light and darkness, and what happens if the retina is detached or the retinal circulation is occluded. Most of this work has been done in vivo in rat, cat, and monkey, but recent work in the isolated mouse retina will also be discussed.
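The fitting idea can be illustrated with the simplest one-layer version of such a diffusion model: steady-state diffusion with constant consumption in an avascular layer yields a parabolic oxygen-tension profile that can be fit to a measured depth profile. A sketch with synthetic data (all parameter values are illustrative; the analyses described in the talk use multi-layer models):

```python
import numpy as np
from scipy.optimize import curve_fit

# Steady-state 1D diffusion with constant O2 consumption Q in an
# avascular layer:  Dk * d2P/dx2 = Q, with boundary tensions Pc at the
# choroidal side (x = 0) and Pr at the retinal side (x = L).  Dk lumps
# diffusivity and solubility; all numbers below are illustrative.
L = 100.0  # layer thickness (micrometers)

def profile(x, Pc, Pr, Q_over_Dk):
    """Analytic solution: a parabola pinned to the two boundary tensions."""
    return Pc + (Pr - Pc) * x / L + 0.5 * Q_over_Dk * x * (x - L)

# Synthetic "microelectrode" depth profile with measurement noise
x = np.linspace(0.0, L, 25)
rng = np.random.default_rng(2)
true_params = (60.0, 20.0, 0.012)
data = profile(x, *true_params) + rng.normal(0.0, 0.5, x.size)

params, _ = curve_fit(profile, x, data, p0=(50.0, 15.0, 0.01))
print(params)  # recovered (Pc, Pr, Q/Dk)
```

Fitting the consumption term Q/Dk from the curvature of the measured profile is what lets the two circulations' relative contributions be separated.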
Predictive modeling, cortical hierarchy, and their computational implications
Predictive modeling and dimensionality reduction of functional neuroimaging data have provided rich information about the representations and functional architectures of the human brain. While these approaches have been effective in many cases, we will discuss how neglecting the internal dynamics of the brain (e.g., spontaneous activity, global dynamics, effective connectivity) and its underlying computational principles may hinder our progress in understanding and modeling brain functions. By reexamining evidence from our previous and ongoing work, we will propose new hypotheses and directions for research that consider both internal dynamics and the computational principles that may govern brain processes.
Modeling shared and variable information encoded in fine-scale cortical topographies
Information is encoded in fine-scale functional topographies that vary from brain to brain. Hyperalignment models information that is shared across brains in a high-dimensional common information space. Hyperalignment transformations project idiosyncratic individual topographies into the common model information space. These transformations contain topographic basis functions, affording estimates of how shared information in the common model space is instantiated in the idiosyncratic functional topographies of individual brains. This new model of the functional organization of cortex – as multiplexed, overlapping basis functions – captures the idiosyncratic conformations of both coarse-scale topographies, such as retinotopy and category-selectivity, and fine-scale topographies. Hyperalignment also makes it possible to investigate how information that is encoded in fine-scale topographies differs across brains. These individual differences in fine-grained cortical function were not accessible with previous methods.
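At its core, estimating such a transformation for a pair of subjects reduces to an orthogonal Procrustes problem. A minimal two-subject sketch with synthetic data (real hyperalignment builds the common space iteratively across many subjects and operates on searchlights or parcels):

```python
import numpy as np

# Two-subject sketch: responses of one subject are a rotated, noisy copy
# of a "common" response matrix (stimuli x voxels).  The orthogonal
# Procrustes solution R = U V^T (from the SVD of subj^T @ common) maps
# the subject's idiosyncratic voxel topography into the common space.
rng = np.random.default_rng(3)
n_stimuli, n_voxels = 50, 30

common = rng.normal(size=(n_stimuli, n_voxels))           # shared space
R_true = np.linalg.qr(rng.normal(size=(n_voxels, n_voxels)))[0]
subj = common @ R_true + 0.05 * rng.normal(size=(n_stimuli, n_voxels))

U, _, Vt = np.linalg.svd(subj.T @ common)
R = U @ Vt                    # orthogonal transformation (basis functions)
aligned = subj @ R

print(np.corrcoef(aligned.ravel(), common.ravel())[0, 1])
```

The columns of R play the role of the topographic basis functions mentioned above: each describes how one common-space dimension is expressed across this subject's voxels.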
Inflammation and Pregnancy
Talk(1): Fetal and maternal NLRP3 signaling is required for preterm labor and birth. (DOI: 10.1172/jci.insight.158238) Talk(2): Maternal IL-33 critically regulates tissue remodeling and type 2 immune responses in the uterus during early pregnancy in mice (DOI: 10.1073/pnas.2123267119)
Can a single neuron solve MNIST? Neural computation of machine learning tasks emerges from the interaction of dendritic properties
Physiological experiments have highlighted how the dendrites of biological neurons can nonlinearly process distributed synaptic inputs. However, it is unclear how qualitative aspects of a dendritic tree, such as its branched morphology, its repetition of presynaptic inputs, voltage-gated ion channels, electrical properties and complex synapses, determine neural computation beyond this apparent nonlinearity. While it has been speculated that the dendritic tree of a neuron can be seen as a multi-layer neural network and it has been shown that such an architecture could be computationally strong, we do not know if that computational strength is preserved under these qualitative biological constraints. Here we simulate multi-layer neural network models of dendritic computation with and without these constraints. We find that dendritic model performance on interesting machine learning tasks is not hurt by most of these constraints and may synergistically benefit from all of them combined. Our results suggest that single real dendritic trees may be able to learn a surprisingly broad range of tasks through the emergent capabilities afforded by their properties.
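The multi-layer-network view of a dendritic tree can be made concrete with the classic two-layer abstraction: branch subunits apply a saturating nonlinearity to their own synapses, and the soma sums the branch outputs. A minimal sketch (the sizes and the tanh nonlinearity are illustrative choices, not the simulated models from the talk):

```python
import numpy as np

# Two-layer abstraction of a dendritic tree: each branch applies a
# saturating nonlinearity to its own weighted synapses, and the soma
# sums the branch outputs.  Repetition of presynaptic inputs (one axon
# contacting several branches) is modeled by sampling with replacement.
rng = np.random.default_rng(4)
n_inputs, n_branches, syn_per_branch = 64, 16, 12

wiring = rng.integers(0, n_inputs, size=(n_branches, syn_per_branch))
w_branch = rng.normal(0.0, 0.5, size=(n_branches, syn_per_branch))
w_soma = rng.normal(0.0, 0.5, size=n_branches)

def neuron(x):
    """Map presynaptic rates x (n_inputs,) to a scalar somatic drive."""
    branch_drive = (w_branch * x[wiring]).sum(axis=1)
    branch_out = np.tanh(branch_drive)   # saturating branch nonlinearity
    return float(w_soma @ branch_out)

print(neuron(rng.random(n_inputs)))
```

Training such a model on a task like MNIST then amounts to optimizing w_branch and w_soma while keeping the tree-structured wiring constraint fixed, which is the kind of constrained architecture the abstract evaluates.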
Network inference via process motifs for lagged correlation in linear stochastic processes
A major challenge for causal inference from time-series data is the trade-off between computational feasibility and accuracy. Motivated by process motifs for lagged covariance in an autoregressive model with slow mean-reversion, we propose to infer networks of causal relations via pairwise edge measures (PEMs) that one can easily compute from lagged correlation matrices. Based on the contributions of process motifs to covariance and lagged variance, we formulate two PEMs that correct for confounding factors and for reverse causation. To demonstrate the performance of our PEMs, we consider network inference from simulations of linear stochastic processes, and we show that our proposed PEMs can infer networks accurately and efficiently. Specifically, for slightly autocorrelated time-series data, our approach achieves accuracies higher than or similar to Granger causality, transfer entropy, and convergent cross-mapping -- but with much shorter computation time than possible with any of these methods. Our fast and accurate PEMs are easy-to-implement methods for network inference with a clear theoretical underpinning. They provide promising alternatives to current paradigms for the inference of linear models from time-series data, including Granger causality, vector autoregression, and sparse inverse covariance estimation.
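The core pipeline -- simulate a linear stochastic process, compute a lagged correlation matrix, and score directed edges from it -- can be sketched as follows. For simplicity this uses the raw lag-1 correlation as the edge measure; the PEMs proposed in the talk start from this matrix and add correction terms for confounding and reverse causation:

```python
import numpy as np

# Simulate x_{t+1} = A^T x_t + noise, where A[i, j] is the weight of
# the directed edge i -> j, then score edges by the lag-1 correlation
# corr(x_i(t), x_j(t+1)).
rng = np.random.default_rng(5)
n, T = 5, 20000

A = 0.3 * np.eye(n)                      # slow mean-reversion
A[0, 1] = A[1, 2] = A[2, 3] = 0.4        # ground-truth chain 0->1->2->3

x = np.zeros(n)
X = np.empty((T, n))
for t in range(T):
    x = A.T @ x + rng.normal(size=n)
    X[t] = x

def lag1_corr(X):
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    return (Z[:-1].T @ Z[1:]) / (len(X) - 1)  # entry (i, j): x_i(t) vs x_j(t+1)

scores = lag1_corr(X)
np.fill_diagonal(scores, 0.0)            # ignore self-edges
print(scores.round(2))                   # true edges score highest
```

Because each score is one entry of a single matrix product, the whole procedure costs far less than fitting a Granger-causal model per edge, which is the computational advantage the abstract emphasizes.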
Beyond Biologically Plausible Spiking Networks for Neuromorphic Computing
Biologically plausible spiking neural networks (SNNs) are an emerging architecture for deep learning tasks due to their energy efficiency when implemented on neuromorphic hardware. However, many of the biological features are at best irrelevant and at worst counterproductive when evaluated in the context of task performance and suitability for neuromorphic hardware. In this talk, I will present an alternative paradigm to design deep learning architectures with good task performance in real-world benchmarks while maintaining all the advantages of SNNs. We do this by focusing on two main features – event-based computation and activity sparsity. Starting from the performant gated recurrent unit (GRU) deep learning architecture, we modify it to make it event-based and activity-sparse. The resulting event-based GRU (EGRU) is extremely efficient for both training and inference. At the same time, it achieves performance close to conventional deep learning architectures in challenging tasks such as language modelling, gesture recognition and sequential MNIST.
AI-assisted language learning: Assessing learners who memorize and reason by analogy
Vocabulary learning applications like Duolingo have millions of users around the world, yet are based on very simple heuristics for choosing the teaching material provided to their users. In this presentation, we will discuss the possibility of developing more advanced artificial teachers based on modeling the learner’s inner characteristics. In the case of teaching vocabulary, understanding how the learner memorizes is enough. When it comes to picking grammar exercises, it becomes essential to assess how the learner reasons, in particular by analogy. This second application will illustrate how analogical and case-based reasoning can be employed in an alternative way in education: not as the teaching algorithm, but as a part of the learner’s model.
Hidden nature of seizures
How seizures emerge from the abnormal dynamics of neural networks within the epileptogenic tissue remains an enigma. Are seizures random events, or do detectable changes in brain dynamics precede them? Are mechanisms of seizure emergence identical at the onset and later stages of epilepsy? Is the risk of seizure occurrence stable, or does it change over time? A myriad of questions remains to be answered to understand the core principles governing seizure genesis. The last decade has brought unprecedented insights into the complex nature of seizure emergence. It is now believed that seizure onset represents the product of the interactions between the process of a transition to seizure, long-term fluctuations in seizure susceptibility, epileptogenesis, and disease progression. During the lecture, we will review the latest observations about mechanisms of ictogenesis operating at multiple temporal scales. We will show how these observations contribute to the formation of a comprehensive theory of seizure genesis and challenge traditional perspectives on ictogenesis. Finally, we will discuss how combining conventional approaches with computational modeling, modern techniques of in vivo imaging, and genetic manipulation opens prospects for the exploration of yet hidden mechanisms of seizure genesis.
Building System Models of Brain-Like Visual Intelligence with Brain-Score
Research in the brain and cognitive sciences attempts to uncover the neural mechanisms underlying intelligent behavior in domains such as vision. Due to the complexities of brain processing, studies necessarily had to start with a narrow scope of experimental investigation and computational modeling. I argue that it is time for our field to take the next step: build system models that capture a range of visual intelligence behaviors along with the underlying neural mechanisms. To make progress on system models, we propose integrative benchmarking – integrating experimental results from many laboratories into suites of benchmarks that guide and constrain those models at multiple stages and scales. We showcase this approach by developing Brain-Score benchmark suites for neural (spike rates) and behavioral experiments in the primate visual ventral stream. By systematically evaluating a wide variety of model candidates, we not only identify models beginning to match a range of brain data (~50% explained variance), but also discover that models’ brain scores are predicted by their object categorization performance (up to 70% ImageNet accuracy). Using the integrative benchmarks, we develop improved state-of-the-art system models that more closely match shallow recurrent neuroanatomy and early visual processing, better predict primate temporal processing, are more robust, and require fewer supervised synaptic updates. Taken together, these integrative benchmarks and system models are first steps to modeling the complexities of brain processing in an entire domain of intelligence.
General purpose event-based architectures for deep learning
Biologically plausible spiking neural networks (SNNs) are an emerging architecture for deep learning tasks due to their energy efficiency when implemented on neuromorphic hardware. However, many of the biological features are at best irrelevant and at worst counterproductive when evaluated in the context of task performance and suitability for neuromorphic hardware. In this talk, I will present an alternative paradigm for designing deep learning architectures with good task performance on real-world benchmarks while maintaining all the advantages of SNNs. We do this by focusing on two main features – event-based computation and activity sparsity. Starting from the performant gated recurrent unit (GRU) deep learning architecture, we modify it to make it event-based and activity-sparse. The resulting event-based GRU (EGRU) is extremely efficient for both training and inference. At the same time, it achieves performance close to conventional deep learning architectures on challenging tasks such as language modelling, gesture recognition, and sequential MNIST.
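The two features the abstract names – event-based computation and activity sparsity – can be sketched in a toy single-unit form: the internal state follows a GRU-style gated update, but the unit only emits a nonzero output when its state crosses a threshold. This is an illustrative simplification, not the published EGRU equations (which use separate gate weights, vectors of units, and a learned threshold); all weights and inputs here are hypothetical.

```python
import math

def egru_like_step(h, x, w_x, w_h, threshold=0.5):
    """One event-based recurrent step (toy sketch, not the actual EGRU).

    The state h updates like a gated unit, but the unit only 'fires'
    (emits a nonzero output) when h exceeds the threshold; below it the
    output is exactly zero, so downstream work can be skipped."""
    pre = w_x * x + w_h * h
    z = 1.0 / (1.0 + math.exp(-pre))      # update gate (shared weights, for brevity)
    cand = math.tanh(pre)                 # candidate state
    h_new = (1.0 - z) * h + z * cand      # GRU-style state update
    event = h_new if h_new > threshold else 0.0
    return h_new, event

h, outputs = 0.0, []
for x in [0.1, 0.2, 2.0, 2.0, 0.0]:     # weak inputs, then strong ones
    h, out = egru_like_step(h, x, w_x=1.0, w_h=0.5)
    outputs.append(out)
sparsity = sum(1 for o in outputs if o == 0.0) / len(outputs)
```

Weak inputs leave the output at zero (no events), which is where the claimed efficiency comes from: zero outputs propagate no computation.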
Internally Organized Abstract Task Maps in the Mouse Medial Frontal Cortex
New tasks are often similar in structure to old ones. Animals that take advantage of such conserved or “abstract” task structures can master new tasks with minimal training. To understand the neural basis of this abstraction, we developed a novel behavioural paradigm for mice, the “ABCD” task, and recorded from their medial frontal neurons as they learned. Animals learned multiple tasks in which they had to visit 4 rewarded locations on a spatial maze in sequence, which defined a sequence of four “task states” (ABCD). Tasks shared the same circular transition structure (… ABCDABCD …) but differed in the spatial arrangement of rewards. As well as improving across tasks, mice inferred that A followed D (i.e., completed the loop) on the very first trial of a new task. This “zero-shot inference” is only possible if the animals have learned the abstract structure of the task. Across tasks, individual medial frontal cortex (mFC) neurons maintained their tuning to the phase of an animal’s trajectory between rewards but not their tuning to task states, even in the absence of spatial tuning. Intriguingly, groups of mFC neurons formed modules of coherently remapping neurons that maintained their tuning relationships across tasks. These tuning relationships were expressed as replay/preplay during sleep, consistent with an internal organisation of activity into multiple, task-matched ring attractors. Remarkably, these modules were anchored to spatial locations: neurons were tuned to specific task-space “distances” from a particular spatial location. These newly discovered “Spatially Anchored Task clocks” (SATs) suggest a novel algorithm for solving abstraction tasks. Using computational modelling, we show that SATs can perform zero-shot inference on new tasks in the absence of plasticity and guide optimal policy in the absence of continual planning. These findings provide novel insights into the frontal mechanisms mediating abstraction and flexible behaviour.
Nonlinear neural network dynamics account for human confidence in a sequence of perceptual decisions
Electrophysiological recordings during perceptual decision tasks in monkeys suggest that the degree of confidence in a decision is based on a simple neural signal produced by the neural decision process. Attractor neural networks provide an appropriate biophysical modeling framework and account for the experimental results very well. However, it remains unclear whether attractor neural networks can account for confidence reports in humans. We present the results from an experiment in which participants are asked to perform an orientation discrimination task, followed by a confidence judgment. Here we show that an attractor neural network model quantitatively reproduces, for each participant, the relations between accuracy, response times, and confidence. We show that the attractor neural network also accounts for confidence-specific sequential effects observed in the experiment (participants are faster on trials following high-confidence trials), as well as non-confidence-specific sequential effects. Remarkably, this is obtained as an inevitable outcome of the network dynamics, without any feedback specific to the previous decision (that would result in, e.g., a change in the model parameters before the onset of the next trial). Our results thus suggest that a metacognitive process such as confidence in one’s decision is linked to the intrinsically nonlinear dynamics of the decision-making neural network.
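The attractor picture the abstract relies on can be conveyed by a minimal two-population rate model: two units receive evidence for opposite choices and inhibit each other, and the separation between winner and loser at the end of the trial serves as a simple confidence proxy. This is a deterministic toy sketch with made-up parameters, not the participants' fitted biophysical model (which includes noise and produces response-time distributions).

```python
import math

def attractor_trial(evidence, n_steps=200, dt=0.1, tau=1.0, w_self=2.0, w_inh=2.0):
    """Toy two-population attractor decision model (hypothetical parameters).

    Each unit excites itself, inhibits the other, and receives signed
    evidence; the activity difference at the end of the trial is used as
    a confidence proxy, as in attractor accounts of decision confidence."""
    f = lambda u: 1.0 / (1.0 + math.exp(-u))   # firing-rate nonlinearity
    r1 = r2 = 0.5                               # symmetric initial state
    for _ in range(n_steps):
        i1 = w_self * r1 - w_inh * r2 + evidence
        i2 = w_self * r2 - w_inh * r1 - evidence
        r1 += dt / tau * (-r1 + f(i1))          # Euler integration
        r2 += dt / tau * (-r2 + f(i2))
    choice = 1 if r1 > r2 else 2
    confidence = abs(r1 - r2)
    return choice, confidence

easy = attractor_trial(evidence=1.0)   # strong evidence: deep, well-separated attractor
hard = attractor_trial(evidence=0.1)   # weak evidence: smaller final separation
```

Because the state at the end of one trial can carry over to the next, such dynamics naturally produce sequential effects without any explicit feedback mechanism, which is the abstract's central point.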
Computational Imaging: Augmenting Optics with Algorithms for Biomedical Microscopy and Neural Imaging
Computational imaging seeks to achieve novel capabilities and overcome conventional limitations by combining optics and algorithms. In this seminar, I will discuss two computational imaging technologies developed in Boston University Computational Imaging Systems lab, including Intensity Diffraction Tomography and Computational Miniature Mesoscope. In our intensity diffraction tomography system, we demonstrate 3D quantitative phase imaging on a simple LED array microscope. We develop both single-scattering and multiple-scattering models to image complex biological samples. In our Computational Miniature Mesoscope, we demonstrate single-shot 3D high-resolution fluorescence imaging across a wide field-of-view in a miniaturized platform. We develop methods to characterize 3D spatially varying aberrations and physical simulator-based deep learning strategies to achieve fast and accurate reconstructions. Broadly, I will discuss how synergies between novel optical instrumentation, physical modeling, and model- and learning-based computational algorithms can push the limits in biomedical microscopy and neural imaging.
Invariant neural subspaces maintained by feedback modulation
This session is a double feature of the Cologne Theoretical Neuroscience Forum and the Institute of Neuroscience and Medicine (INM-6) Computational and Systems Neuroscience of the Jülich Research Center.
Successes and failures of current AI as a model of visual cognition
From Computation to Large-scale Neural Circuitry in Human Belief Updating
Many decisions under uncertainty entail dynamic belief updating: multiple pieces of evidence informing about the state of the environment are accumulated across time to infer the environmental state and to choose a corresponding action. Traditionally, this process has been conceptualized as a linear and perfect (i.e., without loss) integration of sensory information along purely feedforward sensory-motor pathways. Yet natural environments can undergo hidden changes in their state, which requires a non-linear accumulation of decision evidence that strikes a tradeoff between stability and flexibility in response to change. How this adaptive computation is implemented in the brain has remained unknown. In this talk, I will present an approach that my laboratory has developed to identify evidence accumulation signatures in human behavior and neural population activity (measured with magnetoencephalography, MEG) across a large number of cortical areas. Applying this approach to data recorded during visual evidence accumulation tasks with change-points, we find that behavior and neural activity in frontal and parietal regions involved in motor planning exhibit hallmark signatures of adaptive evidence accumulation. The same signatures of adaptive behavior and neural activity emerge naturally from simulations of a biophysically detailed model of a recurrent cortical microcircuit. The MEG data further show that decision dynamics in parietal and frontal cortex are mirrored by a selective modulation of the state of early visual cortex. This state modulation is (i) specifically expressed in the alpha frequency-band, (ii) consistent with feedback of evolving belief states from frontal cortex, (iii) dependent on the environmental volatility, and (iv) amplified by pupil-linked arousal responses during evidence accumulation.
Together, our findings link normative decision computations to recurrent cortical circuit dynamics and highlight the adaptive nature of decision-related long-range feedback processing in the brain.
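The stability-flexibility tradeoff described above has a standard normative formalization for environments whose hidden state can switch with a fixed hazard rate: the accumulated log-posterior is nonlinearly discounted before each new sample of evidence is added. The sketch below uses that generic form with hypothetical inputs; it is an illustration of the computation, not the talk's specific circuit model.

```python
import math

def accumulate(llrs, hazard):
    """Nonlinear belief accumulation in a volatile environment.

    `llrs` are per-sample log-likelihood ratios for the two hidden states;
    `hazard` is the assumed per-sample probability that the state switches.
    As hazard -> 0 this approaches perfect linear accumulation; a larger
    hazard discounts old evidence, so the belief saturates (flexibility)."""
    L = 0.0
    ratio = (1.0 - hazard) / hazard
    for llr in llrs:
        # Discount the prior belief toward zero before adding new evidence.
        prior = L + math.log((ratio + math.exp(-L)) / (ratio + math.exp(L)))
        L = prior + llr
    return L

samples = [1.0] * 10                          # ten samples favoring the same state
stable = accumulate(samples, hazard=0.01)     # near-linear: belief grows large
volatile = accumulate(samples, hazard=0.3)    # strong discounting: belief saturates
```

The saturation under high hazard is what lets the accumulator re-commit quickly after a change-point, at the cost of a less extreme belief during stable periods.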
Drifting assemblies for persistent memory: Neuron transitions and unsupervised compensation
Change is ubiquitous in living beings. In particular, the connectome and neural representations can change. Nevertheless, behaviors and memories often persist over long times. In a standard model, associative memories are represented by assemblies of strongly interconnected neurons. For faithful storage, these assemblies are assumed to consist of the same neurons over time. We propose a contrasting memory model with complete temporal remodeling of assemblies, based on experimentally observed changes of synapses and neural representations. The assemblies drift freely as noisy autonomous network activity or spontaneous synaptic turnover induces neuron exchange. The exchange can be described analytically by reduced, random-walk models derived from spiking neural network dynamics or from first principles. The gradual exchange allows activity-dependent and homeostatic plasticity to conserve the representational structure and keep inputs, outputs, and assemblies consistent. This leads to persistent memory. Our findings explain recent experimental results on the temporal evolution of fear memory representations and suggest that memory systems need to be understood in their completeness, as individual parts may constantly change.
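The reduced random-walk picture mentioned in the abstract can be illustrated with a deliberately simple simulation: each neuron occasionally re-draws its assembly membership at random, so the membership of any given assembly churns completely over time, while the overall structure (two assemblies of roughly equal size) persists. All parameters are hypothetical; the published model derives the exchange rates from spiking-network dynamics rather than positing them.

```python
import random

def drift_assemblies(n_neurons=100, n_assemblies=2, steps=1000, p_switch=0.01, seed=0):
    """Toy random-walk model of assembly drift (illustrative parameters).

    Returns the overlap between initial and final membership (fraction of
    neurons still in their original assembly) and the final assembly sizes."""
    rng = random.Random(seed)
    label = [i % n_assemblies for i in range(n_neurons)]
    initial = list(label)
    for _ in range(steps):
        for i in range(n_neurons):
            if rng.random() < p_switch:          # occasional membership change
                label[i] = rng.randrange(n_assemblies)
    overlap = sum(a == b for a, b in zip(initial, label)) / n_neurons
    sizes = [label.count(k) for k in range(n_assemblies)]
    return overlap, sizes

overlap, sizes = drift_assemblies()
```

After many expected switches per neuron, the overlap with the initial assignment decays to chance level (1/2 for two assemblies), even though the assemblies themselves remain intact as population-level objects – the core dissociation the abstract argues for.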
Feedforward and feedback processes in visual recognition
Progress in deep learning has spawned great successes in many engineering applications. As a prime example, convolutional neural networks, a type of feedforward neural network, are now approaching – and sometimes even surpassing – human accuracy on a variety of visual recognition tasks. In this talk, however, I will show that these neural networks and their recent extensions exhibit a limited ability to solve seemingly simple visual reasoning problems involving incremental grouping, similarity, and spatial relation judgments. Our group has developed a recurrent network model of classical and extra-classical receptive field circuits that is constrained by the anatomy and physiology of the visual cortex. The model was shown to account for diverse visual illusions, providing computational evidence for a novel canonical circuit that is shared across visual modalities. I will show that this computational neuroscience model can be turned into a modern end-to-end trainable deep recurrent network architecture that addresses some of the shortcomings exhibited by state-of-the-art feedforward networks on complex visual reasoning tasks. This suggests that neuroscience may contribute powerful new ideas and approaches to computer science and artificial intelligence.
Adolescent maturation of cortical excitation-inhibition balance based on individualized biophysical network modeling
Bernstein Conference 2024
Mechanistic modeling of Drosophila neural population codes in natural social communication
COSYNE 2022
Modeling the orbitofrontal cortex function in navigation through an RL-RNN implementation
COSYNE 2023
Hard to digest: The challenges of modeling the dynamics of the C. elegans pharynx
FENS Forum 2024
Method for 3D quantitative analysis of enteric nervous system remodeling in mouse and human gut tissues
FENS Forum 2024
PTCHD1 modulates cytoskeleton remodeling through regulation of Rac1-PAK signaling pathway, consistent with neurodevelopmental disorders phenotype
FENS Forum 2024
Controlled sampling of non-equilibrium brain dynamics: modeling and estimation from neuroimaging signals
Bernstein Conference 2024
cuBNM: GPU-Accelerated Biophysical Network Modeling
Bernstein Conference 2024
Deep generative networks as a computational approach for global non-linear control modeling in the nematode C. elegans
Bernstein Conference 2024
Deep inverse modeling reveals dynamic-dependent invariances in neural circuits mechanisms
Bernstein Conference 2024
A new framework for modeling innate capabilities in network with diverse types of spiking neurons: Probabilistic Skeleton
Bernstein Conference 2024
Modeling the autistic cerebellum: propagation of granule cells alteration through the granular layer microcircuit
Bernstein Conference 2024
Modeling competitive memory encoding using a Hopfield network
Bernstein Conference 2024
Modeling spatial and temporal attractive and repulsive biases in perception
Bernstein Conference 2024
Modeling HCN Channel-Mediated Modulation on Dendro-Somatic Electric Coupling in CA1 Pyramidal Cells
Bernstein Conference 2024
Modeling Decision-Making in Trajectory Extrapolation Tasks: Comparing Random Sampling Model and Multi-Layer Perceptron Approaches
Bernstein Conference 2024
Modeling gait dynamics with switching non-linear dynamical systems
Bernstein Conference 2024
Quantitative modeling of the emergence of macroscopic grid-like representations
Bernstein Conference 2024
Rapid prototyping in spiking neural network modeling with NESTML and NEST Desktop
Bernstein Conference 2024
Task choice influences single-neuron tuning predictions in connectome-constrained modeling
Bernstein Conference 2024
Deep neural network modeling of a visually-guided social behavior
COSYNE 2022
Modeling the formation of the visual hierarchy
COSYNE 2022
Modeling Hippocampal Spatial Learning Through a Valence-based Interplay of Dopamine and Serotonin
COSYNE 2022
Modeling multi-region neural communication during decision making with recurrent switching dynamical systems
COSYNE 2022
Modeling and optimization for neuromodulation in spinal cord stimulation
COSYNE 2022
Modeling tutor-directed dynamics in zebra finch song learning
COSYNE 2022
Multiscale Hierarchical Modeling Framework For Fully Mapping a Social Interaction
COSYNE 2022
Online neural modeling and Bayesian optimization for closed-loop adaptive experiments
COSYNE 2022
Semi-supervised sequence modeling for improved behavior segmentation
COSYNE 2022
“Attentional fingerprints” in conceptual space: Reliable, individuating patterns of visual attention revealed using natural language modeling
COSYNE 2023
The cost of behavioral flexibility: a modeling study of reversal learning using a spiking neural network
Bernstein Conference 2024