World Wide · Topic: modeling

60 Seminars · 40 ePosters · 12 Positions


Position: Computational Neuroscience

University of Chicago - Grossman Center for Quantitative Biology and Human Behavior

University of Chicago
Chicago, USA
Jan 4, 2026

The Grossman Center for Quantitative Biology and Human Behavior at the University of Chicago seeks outstanding applicants for multiple postdoctoral positions in computational and theoretical neuroscience.

Position

Tansu Celikel

Georgia Institute of Technology
Atlanta, Georgia - USA
Jan 4, 2026

The School of Psychology (psychology.gatech.edu/) at the Georgia Institute of Technology (www.gatech.edu/) invites nominations and applications for five open-rank tenure-track faculty positions with an anticipated start date of August 2023 or later. The successful applicant will be expected to demonstrate and develop an exceptional research program. The research area is open, but we are particularly interested in candidates whose scholarship complements existing School strengths in Adult Development and Aging, Cognition and Brain Science, Engineering Psychology, Work and Organizational Psychology, and Quantitative Psychology, and takes advantage of quantitative, mathematical, and/or computational methods. The School of Psychology is well-positioned in the College of Sciences at Georgia Tech, a university that promotes translational research from the laboratory and field to real-world applications in a variety of areas. The School offers multidisciplinary educational programs, graduate training, and research opportunities in the study of mind, brain, and behavior and the associated development of technologies that can improve human experience. Excellent research facilities support the School's research and interdisciplinary graduate programs across the Institute. Georgia Tech's commitment to interdisciplinary collaboration has fostered fruitful interactions between psychology faculty and faculty in the sciences, computing, business, engineering, design, and liberal arts. Located in the heart of Atlanta, one of the nation's most academic, entrepreneurial, creative and diverse cities with excellent quality of life, the School actively develops and maintains a rich network of academic and applied behavioral science/industrial partnerships in and beyond Atlanta.
Candidates whose research programs foster collaborative interactions with other members of the School and further contribute to bridge-building with other academic and research units at Tech and industries are particularly encouraged to apply. Applications can be submitted online (bit.ly/Join-us-at-GT-Psych) and should include a Cover Letter, Curriculum Vitae (including a list of publications), Research Statement, Teaching Statement, DEI (diversity, equity, and inclusion) statement, and contact information of at least three individuals who have agreed to provide a reference in support of the application if asked. Evaluation of applications will begin October 10th, 2022 and continue until all positions are filled. Questions about this search can be addressed to faculty_search@psych.gatech.edu. Portal questions will be answered by Tikica Platt, the School’s HR director, and questions about positions by the co-chairs of the search committee, Ruth Kanfer and Tansu Celikel.

Position: Computational Neuroscience

Prof Geoff Goodhill

Washington University in St Louis
St Louis, USA
Jan 4, 2026

A new NIH-funded collaboration between David Prober (Caltech), Thai Truong (USC) and Geoff Goodhill (Washington University in St Louis) aims to gain new insight into the neural circuits underlying sleep, through a combination of whole-brain neural recordings in zebrafish and theoretical/computational modeling. The Goodhill lab is now looking for 2 postdocs for the modeling and computational analysis components. Using novel 2-photon imaging technologies Prober and Truong will record from the entire larval zebrafish brain at single-neuron resolution continuously for long periods of time, examining neural circuit activity during normal day-night cycles and in response to genetic and pharmacological perturbations. The Goodhill lab will analyze the resulting huge datasets using a variety of sophisticated computational approaches, and use these results to build new theoretical models that reveal how neural circuits interact to govern sleep. Theoretical and experimental work will be intimately linked.

Position: Computational Biology

Navin Pokala

New York Institute of Technology
New York City, USA
Jan 4, 2026

The Department of Biological and Chemical Sciences at New York Institute of Technology seeks outstanding applicants for a tenure-track position at the Assistant Professor level to develop a research program in the broadly defined fields of biostatistics, bioinformatics or computational biology that complements existing research programs and carries potential to establish external collaborations. The successful candidate will teach introductory and advanced courses in the biological sciences at the undergraduate level, notably Biostatistics. The Department has undergraduate programs in Biology, Chemistry, and Biotechnology at the New York City and Long Island (Old Westbury) campuses. New York Tech emphasizes interdisciplinary scholarship, research, and teaching. Department faculty research interests are diverse, including medicinal and organic chemistry, neuroscience, cell and molecular biology, genetics, biochemistry, microbiology, computational chemistry, and analytical chemistry. Faculty in the Department have ample opportunity to collaborate with faculty at the New York Tech’s College of Engineering and Computer Sciences and College of Osteopathic Medicine.

Position

Dr. Simon Danner

Drexel University College of Medicine
Philadelphia, PA, USA
Jan 4, 2026

A Postdoctoral Fellow/Research Associate position is available in Dr. Simon Danner's laboratory at the Department of Neurobiology and Anatomy, Drexel University College of Medicine to study the spinal locomotor circuitry and its interactions with the musculoskeletal system and afferent feedback. The qualified postdoc will work on several collaborative, interdisciplinary, NIH-funded projects to uncover the connectivity and function of somatosensory afferents and various genetically or anatomically identified interneurons. The studies involve the development of computer models of mouse, rat, and cat biomechanics connected with models of the spinal locomotor circuitry. The successful candidate will closely collaborate with other computational and experimental neuroscientists: they will use experimental data to implement and refine the model, and use the model to derive predictions that will then be tested experimentally by our collaborators.

Essential Functions:
- Work with existing and develop new biomechanical models of the mouse, rat and cat
- Develop neural network models of the spinal locomotor circuits
- Integrate the neural network and biomechanical models to simulate locomotor behavior
- Use numerical optimization to optimize the neuromechanical models
- Apply machine learning/reinforcement learning
- Use the models to derive experimentally testable predictions
- Closely collaborate with experimental neuroscientists
- Analyze kinematic and electrophysiological data
- Write and submit research manuscripts
- Present novel findings at national and international conferences

The qualified candidate will benefit from joining a well-funded research group that works in a dynamic, collaborative and interdisciplinary environment.
The highly collegial Danner lab is a member of the Neuroengineering Program, the Theoretical & Computational Neuroscience group, and the Spinal Cord Research Center within Drexel University College of Medicine’s Department of Neurobiology and Anatomy (http://drexel.edu/medicine/About/Departments/Neurobiology-Anatomy/) in Philadelphia, PA. The Department provides an outstanding scientific environment for multidisciplinary training. Interactions and collaborations between labs and between other departments are encouraged.

Position

Marwen Belkaid

ETIS Laboratory (CNRS UMR8051) of CY Cergy Paris University
Paris region
Jan 4, 2026

A PhD position in neurorobotics is available on the topic of modeling affective processes for visual attention, decision-making and social behaviors. The thesis project will be carried out at the ETIS Laboratory (CNRS UMR8051) of CY Cergy Paris University, located in the Paris region.

Position: Neuroscience

Geoffrey J Goodhill

Washington University School of Medicine
St. Louis, MO
Jan 4, 2026

An NIH-funded collaboration between David Prober (Caltech), Thai Truong (USC) and Geoff Goodhill (Washington University in St Louis) aims to gain new insight into the neural circuits underlying sleep, through a combination of whole-brain neural recordings in zebrafish and theoretical/computational modeling. A postdoc position is available in the Goodhill lab to contribute to the modeling and computational analysis components. Using novel 2-photon imaging technologies Prober and Truong are recording from the entire larval zebrafish brain at single-neuron resolution continuously for long periods of time, examining neural circuit activity during normal day-night cycles and in response to genetic and pharmacological perturbations. The Goodhill lab is analyzing the resulting huge datasets using a variety of sophisticated computational approaches, and using these results to build new theoretical models that reveal how neural circuits interact to govern sleep.

Position

Chloé Bourgeois-Antonini

Institut NeuroMod, Université Côte d’Azur
Université Côte d’Azur, Nice, France
Jan 4, 2026

The M.Sc. Mod4NeuCog is a two-year interdisciplinary master's program at Université Côte d'Azur (Nice, France) that aims to train active researchers at the crossroads of computer science, applied mathematics and cognitive neuroscience. Students learn to model cognitive functions using mathematical and computational tools and graduate as computational neuro-/cognitive scientists, able to work in fully interdisciplinary settings with a strong foundation in mathematics.

Position: Computational Neuroscience

Prof. Erik De Schutter

Computational Neuroscience Unit, Okinawa Institute of Science and Technology
Okinawa, Japan
Jan 4, 2026

A postdoctoral position is available in the Computational Neuroscience Unit (https://groups.oist.jp/cnu) of Prof. Erik De Schutter at the Okinawa Institute of Science and Technology for a researcher interested in using modeling to better understand cerebellar properties and function. Candidates should have good knowledge of cerebellar anatomy and physiology, supported by previous modeling work, and be open to an explorative approach. Depending on prior experience and interest, the focus can be on modeling the cerebellum and its neurons and/or on analyzing experimental data obtained through collaboration. The postdoc will interact with other researchers and students in the lab who are working on cerebellar modeling projects. We offer attractive working conditions in an English-language graduate university located on a beautiful subtropical island. Starting date: any time before the end of 2024. Send a curriculum vitae, a summary of research interests and experience, and the names of two referees to Prof. Erik De Schutter at erik@oist.jp.

Seminar: Neuroscience

OpenNeuro FitLins GLM: An Accessible, Semi-Automated Pipeline for OpenNeuro Task fMRI Analysis

Michael Demidenko
Stanford University
Aug 1, 2025

In this talk, I will discuss the OpenNeuro FitLins GLM package and provide an illustration of the analytic workflow. OpenNeuro FitLins GLM is a semi-automated pipeline that reduces barriers to analyzing task-based fMRI data from OpenNeuro's 600+ task datasets. Created for psychology, psychiatry and cognitive neuroscience researchers without extensive computational expertise, this tool automates what is otherwise a largely manual process, built from in-house scripts for data retrieval, validation, quality control, statistical modeling and reporting, that can require weeks of effort. The workflow abides by open-science practices, enhancing reproducibility, and incorporates community feedback for model improvement. The pipeline integrates BIDS-compliant datasets and fMRIPrep preprocessed derivatives, and dynamically creates BIDS Statistical Model specifications (with FitLins) to perform common mass-univariate (GLM) analyses. To enhance and standardize reporting, it generates comprehensive reports that include design matrices, statistical maps and COBIDAS-aligned reporting that is fully reproducible from the model specifications and derivatives. OpenNeuro FitLins GLM has been tested on over 30 datasets spanning 50+ unique fMRI tasks (e.g., working memory, social processing, emotion regulation, decision-making, motor paradigms), reducing analysis times from weeks to hours when using high-performance computers, thereby enabling researchers to conduct robust single-study, meta- and mega-analyses of task fMRI data with significantly improved accessibility, standardized reporting and reproducibility.
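For readers unfamiliar with the analysis that such pipelines automate, here is a minimal, illustrative mass-univariate GLM on synthetic data: convolve event onsets with a hemodynamic response function, assemble a design matrix, and fit voxelwise betas by ordinary least squares. This is a toy sketch, not the FitLins API; every name and parameter below is invented for illustration.

```python
import numpy as np

def glm_betas(bold, events, tr=2.0):
    """Toy mass-univariate GLM: one HRF-convolved task regressor
    plus an intercept, fit voxelwise by ordinary least squares."""
    n_vols = bold.shape[0]
    t = np.arange(0, 30, tr)
    hrf = (t ** 5) * np.exp(-t)              # crude gamma-like HRF shape
    hrf /= hrf.sum()
    stim = np.zeros(n_vols)
    for onset in events:                      # event onsets in seconds
        stim[int(onset // tr)] = 1.0
    reg = np.convolve(stim, hrf)[:n_vols]     # predicted BOLD response
    X = np.column_stack([reg, np.ones(n_vols)])   # design matrix
    betas, *_ = np.linalg.lstsq(X, bold, rcond=None)
    return betas                              # shape: (2, n_voxels)

rng = np.random.default_rng(0)
bold = rng.normal(size=(100, 500))            # 100 volumes x 500 voxels
betas = glm_betas(bold, events=[10, 60, 120], tr=2.0)
print(betas.shape)                            # (2, 500)
```

Real pipelines add confound regressors, contrasts, and group-level models on top of this voxelwise fit, which is exactly the bookkeeping that the package above standardizes.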

Seminar: Neuroscience

Neurobiological constraints on learning: bug or feature?

Cian O'Donnell
Ulster University
Jun 11, 2025

Understanding how brains learn requires bridging evidence across scales—from behaviour and neural circuits to cells, synapses, and molecules. In our work, we use computational modelling and data analysis to explore how the physical properties of neurons and neural circuits constrain learning. These include limits imposed by brain wiring, energy availability, molecular noise, and the 3D structure of dendritic spines. In this talk I will describe one such project testing if wiring motifs from fly brain connectomes can improve performance of reservoir computers, a type of recurrent neural network. The hope is that these insights into brain learning will lead to improved learning algorithms for artificial systems.
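As a rough illustration of the reservoir-computing setup mentioned above, the sketch below runs a leaky echo-state network with a random recurrent weight matrix and fits a linear readout by least squares; in the work described, that random matrix would be replaced or constrained by connectome-derived wiring motifs. All parameter values are illustrative assumptions, not those of the study.

```python
import numpy as np

def reservoir_states(W, W_in, inputs, leak=0.3):
    """Leaky echo-state reservoir: x <- (1-a)x + a*tanh(W x + W_in u).
    W may be random or derived from connectome wiring motifs."""
    x = np.zeros(W.shape[0])
    states = []
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(W @ x + W_in @ u)
        states.append(x.copy())
    return np.array(states)

rng = np.random.default_rng(1)
n, n_in = 200, 1
W = rng.normal(size=(n, n)) / np.sqrt(n)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1
W_in = rng.normal(size=(n, n_in))
u = np.sin(np.linspace(0, 8 * np.pi, 400)).reshape(-1, 1)

X = reservoir_states(W, W_in, u)
# train a linear readout to predict the next input step
w_out, *_ = np.linalg.lstsq(X[:-1], u[1:], rcond=None)
print(X.shape, w_out.shape)                       # (400, 200) (200, 1)
```

Only the readout is trained; the recurrent weights stay fixed, which is what makes reservoirs a convenient testbed for asking whether biological wiring statistics alone improve computation.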

Seminar: Artificial Intelligence (Recording)

Computational modelling of ocular pharmacokinetics

Arto Urtti
School of Pharmacy, University of Eastern Finland
Apr 22, 2025

Pharmacokinetics in the eye is an important factor for the success of ocular drug delivery and treatment. Pharmacokinetic features determine the feasible routes of drug administration, dosing levels and intervals, and it has impact on eventual drug responses. Several physical, biochemical, and flow-related barriers limit drug exposure of anterior and posterior ocular target tissues during treatment during local (topical, subconjunctival, intravitreal) and systemic administration (intravenous, per oral). Mathematical models integrate joint impact of various barriers on ocular pharmacokinetics (PKs) thereby helping drug development. The models are useful in describing (top-down) and predicting (bottom-up) pharmacokinetics of ocular drugs. This is useful also in the design and development of new drug molecules and drug delivery systems. Furthermore, the models can be used for interspecies translation and probing of disease effects on pharmacokinetics. In this lecture, ocular pharmacokinetics and current modelling methods (noncompartmental analyses, compartmental, physiologically based, and finite element models) are introduced. Future challenges are also highlighted (e.g. intra-tissue distribution, prediction of drug responses, active transport).

Seminar: Neuroscience (Recording)

An inconvenient truth: pathophysiological remodeling of the inner retina in photoreceptor degeneration

Michael Telias
University of Rochester
Apr 8, 2025

Photoreceptor loss is the primary cause behind vision impairment and blindness in diseases such as retinitis pigmentosa and age-related macular degeneration. However, the death of rods and cones allows retinoids to permeate the inner retina, causing retinal ganglion cells to become spontaneously hyperactive, severely reducing the signal-to-noise ratio, and creating interference in the communication between the surviving retina and the brain. Treatments aimed at blocking or reducing hyperactivity improve vision initiated from surviving photoreceptors and could enhance the signal fidelity generated by vision restoration methodologies.

Seminar: Neuroscience

Screen Savers: Protecting adolescent mental health in a digital world

Amy Orben
University of Cambridge, UK
Dec 3, 2024

In our rapidly evolving digital world, there is increasing concern about the impact of digital technologies and social media on the mental health of young people. Policymakers and the public are nervous. Psychologists are facing mounting pressures to deliver evidence that can inform policies and practices to safeguard both young people and society at large. However, research progress is slow while technological change is accelerating.

My talk will reflect on this, both as a question of psychological science and metascience. Digital companies have designed highly popular environments that differ in important ways from traditional offline spaces. By revisiting the foundations of psychology (e.g. development and cognition) and considering digital changes' impact on theories and findings, we gain deeper insights into questions such as the following. (1) How do digital environments exacerbate developmental vulnerabilities that predispose young people to mental health conditions? (2) How do digital designs interact with cognitive and learning processes, formalised through computational approaches such as reinforcement learning or Bayesian modelling?

However, we also need to face deeper questions about what it means to do science about new technologies and the challenge of keeping pace with technological advancements. Therefore, I discuss the concept of 'fast science', where, during crises, scientists might lower their standards of evidence to come to conclusions quicker. Might psychologists want to take this approach in the face of technological change and looming concerns? The talk concludes with a discussion of such strategies for 21st-century psychology research in the era of digitalization.

Seminar: Neuroscience

Sensory cognition

SueYeon Chung, Srini Turaga
New York University; Janelia Research Campus
Nov 29, 2024

This webinar features presentations from SueYeon Chung (New York University) and Srinivas Turaga (HHMI Janelia Research Campus) on theoretical and computational approaches to sensory cognition. Chung introduced a “neural manifold” framework to capture how high-dimensional neural activity is structured into meaningful manifolds reflecting object representations. She demonstrated that manifold geometry—shaped by radius, dimensionality, and correlations—directly governs a population’s capacity for classifying or separating stimuli under nuisance variations. Applying these ideas as a data analysis tool, she showed how measuring object-manifold geometry can explain transformations along the ventral visual stream and suggested that manifold principles also yield better self-supervised neural network models resembling mammalian visual cortex. Turaga described simulating the entire fruit fly visual pathway using its connectome, modeling 64 key cell types in the optic lobe. His team’s systematic approach—combining sparse connectivity from electron microscopy with simple dynamical parameters—recapitulated known motion-selective responses and produced novel testable predictions. Together, these studies underscore the power of combining connectomic detail, task objectives, and geometric theories to unravel neural computations bridging from stimuli to cognitive functions.

Seminar: Neuroscience

Brain-on-a-Chip: Advanced In Vitro Platforms for Drug Screening and Disease Modeling

Pediaditakis Iosif (Sifis)
Phragma Therapeutics
Nov 21, 2024

Seminar: Neuroscience

Unmotivated bias

William Cunningham
University of Toronto
Nov 12, 2024

In this talk, I will explore how social affective biases arise even in the absence of motivational factors as an emergent outcome of the basic structure of social learning. In several studies, we found that initial negative interactions with some members of a group can cause subsequent avoidance of the entire group, and that this avoidance perpetuates stereotypes. Additional cognitive modeling revealed that approach and avoidance behavior based on biased beliefs not only influences the evaluative (positive or negative) impressions of group members, but also shapes the depth of the cognitive representations available to learn about individuals. In other words, people have richer cognitive representations of members of groups that are not avoided, akin to individualized vs group-level categories. I will end by presenting a series of multi-agent reinforcement learning simulations that demonstrate the emergence of these social-structural feedback loops in the development and maintenance of affective biases.

Seminar: Neuroscience

Contribution of computational models of reinforcement learning to neuroscience

Keywords: computational modeling, reward, learning, decision-making, conditioning, navigation, dopamine, basal ganglia, prefrontal cortex, hippocampus

Mehdi Khamassi
Centre National de la Recherche Scientifique / Sorbonne University
Nov 8, 2024

Seminar: Neuroscience

Beyond Homogeneity: Characterizing Brain Disorder Heterogeneity through EEG and Normative Modeling

Mahmoud Hassan
Founder and CEO of MINDIG, Rennes, France. Adjunct professor, Reykjavik University, Reykjavik, Iceland.
Oct 9, 2024

Electroencephalography (EEG) has been studied thoroughly for decades in psychiatry research, yet its integration into clinical practice as a diagnostic/prognostic tool remains unachieved. We hypothesize that a key reason is the underlying heterogeneity of patients, overlooked in psychiatric EEG research that relies on a case-control approach. We combine HD-EEG with normative modeling to quantify this heterogeneity using two well-established and extensively investigated EEG characteristics, spectral power and functional connectivity, across a cohort of 1674 patients with attention-deficit/hyperactivity disorder, autism spectrum disorder, learning disorder, or anxiety, and 560 matched controls. Normative models showed that deviations from population norms among patients were highly heterogeneous and frequency-dependent. The spatial overlap of deviations across patients did not exceed 40% for spectral power and 24% for connectivity. Considering individual deviations in patients significantly enhanced comparative analyses, and the identification of patient-specific markers correlated with clinical assessments, representing a crucial step towards attaining precision psychiatry through EEG.
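The normative-modeling logic described above, z-scoring each patient against the control distribution and measuring how much patients' deviations overlap spatially, can be sketched on synthetic data as follows. Thresholds, cohort sizes, and feature counts are illustrative assumptions, not the study's actual model.

```python
import numpy as np

def deviation_map(patients, controls, z_thresh=2.0):
    """Z-score each patient feature against the control (normative)
    distribution and flag supra-threshold deviations."""
    mu = controls.mean(axis=0)
    sd = controls.std(axis=0, ddof=1)
    z = (patients - mu) / sd
    return np.abs(z) > z_thresh               # boolean deviation map

rng = np.random.default_rng(2)
controls = rng.normal(size=(560, 64))          # e.g. 64 EEG features
patients = rng.normal(0.3, 1.2, size=(1674, 64))

dev = deviation_map(patients, controls)
# spatial overlap: fraction of patients deviating at each feature
overlap = dev.mean(axis=0)
print(f"max overlap across features: {overlap.max():.2f}")
```

The key point of the approach is visible here: even when patients deviate on average, the specific features flagged differ from patient to patient, so per-feature overlap stays well below 100%.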

Seminar: Artificial Intelligence (Recording)

Why age-related macular degeneration is a mathematically tractable disease

Christine Curcio
The University of Alabama at Birmingham Heersink School of Medicine
Aug 19, 2024

Among all prevalent diseases with a central neurodegeneration, AMD can be considered the most promising in terms of prevention and early intervention, due to several factors surrounding the neural geometry of the foveal singularity.
- Steep gradients of cell density, deployed in a radially symmetric fashion, can be modeled with a difference of Gaussian curves.
- These steep gradients give rise to huge, spatially aligned biologic effects, summarized as the Center of Cone Resilience, Surround of Rod Vulnerability.
- Widely used clinical imaging technology provides cellular and subcellular level information.
- Data are now available at all timelines: clinical, lifespan, evolutionary.
- Snapshots are available from tissues (histology, analytic chemistry, gene expression).
- A viable biogenesis model exists for drusen, the largest population-level intraocular risk factor for progression.
- The biogenesis model shares molecular commonality with atherosclerotic cardiovascular disease, for which there has been decades of public health success.
- Animal and cell model systems are emerging to test these ideas.
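The first point, radial cell-density gradients modeled as a difference of Gaussians, can be sketched as a sharp central Gaussian minus a broader surround. All amplitudes and widths below are illustrative assumptions, not fitted retinal values.

```python
import numpy as np

def dog(r, a1, s1, a2, s2):
    """Difference-of-Gaussians radial profile: a narrow central Gaussian
    minus a broader surround, as a model of foveal cell-density gradients."""
    return a1 * np.exp(-(r / s1) ** 2) - a2 * np.exp(-(r / s2) ** 2)

r = np.linspace(0, 5, 501)                    # eccentricity (mm, illustrative)
density = dog(r, a1=150.0, s1=0.4, a2=30.0, s2=2.0)
peak_r = r[np.argmax(density)]
print(f"peak density at eccentricity {peak_r:.2f} mm")   # peak at 0.00 mm
```

With these (assumed) parameters the profile peaks at the center and falls off steeply, mimicking the cone-density gradient; swapping the sign conventions yields surround-dominated profiles like the rod ring.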

Seminar: Neuroscience

Updating our models of the basal ganglia using advances in neuroanatomy and computational modeling

Mac Shine
University of Sydney
May 29, 2024

Seminar: Neuroscience

Generative models for video games (rescheduled)

Katja Hofmann
Microsoft Research
May 22, 2024

Developing agents capable of modeling complex environments and human behaviors within them is a key goal of artificial intelligence research. Progress towards this goal has exciting potential for applications in video games, from new tools that empower game developers to realize new creative visions, to enabling new kinds of immersive player experiences. This talk focuses on recent advances of my team at Microsoft Research towards scalable machine learning architectures that effectively capture human gameplay data. In the first part of my talk, I will focus on diffusion models as generative models of human behavior. Previously shown to have impressive image generation capabilities, I present insights that unlock applications to imitation learning for sequential decision making. In the second part of my talk, I discuss a recent project taking ideas from language modeling to build a generative sequence model of an Xbox game.

Seminar: Neuroscience

Modelling the fruit fly brain and body

Srinivas Turaga
HHMI | Janelia
May 15, 2024

Through recent advances in microscopy, we now have an unprecedented view of the brain and body of the fruit fly Drosophila melanogaster. We now know the connectivity at single neuron resolution across the whole brain. How do we translate these new measurements into a deeper understanding of how the brain processes sensory information and produces behavior? I will describe two computational efforts to model the brain and the body of the fruit fly. First, I will describe a new modeling method which makes highly accurate predictions of neural activity in the fly visual system as measured in the living brain, using only measurements of its connectivity from a dead brain [1], joint work with Jakob Macke. Second, I will describe a whole body physics simulation of the fruit fly which can accurately reproduce its locomotion behaviors, both flight and walking [2], joint work with Google DeepMind.

Seminar: Neuroscience

Generative models for video games

Katja Hofmann
Microsoft Research
May 1, 2024

Developing agents capable of modeling complex environments and human behaviors within them is a key goal of artificial intelligence research. Progress towards this goal has exciting potential for applications in video games, from new tools that empower game developers to realize new creative visions, to enabling new kinds of immersive player experiences. This talk focuses on recent advances of my team at Microsoft Research towards scalable machine learning architectures that effectively capture human gameplay data. In the first part of my talk, I will focus on diffusion models as generative models of human behavior. Previously shown to have impressive image generation capabilities, I present insights that unlock applications to imitation learning for sequential decision making. In the second part of my talk, I discuss a recent project taking ideas from language modeling to build a generative sequence model of an Xbox game.

Seminar: Neuroscience

Modeling human brain development and disease: the role of primary cilia

Kyrousi Christina
Medical School, National and Kapodistrian University of Athens, Athens, Greece
Apr 24, 2024

Neurodevelopmental disorders (NDDs) impose a global burden, affecting an increasing number of individuals. While some causative genes have been identified, understanding of the human-specific mechanisms involved in these disorders remains limited. Traditional gene-driven approaches for modeling brain diseases have failed to capture the diverse and convergent mechanisms at play. Centrosomes and cilia act as intermediaries between environmental and intrinsic signals, regulating cellular behavior. Mutations or dosage variations disrupting their function have been linked to brain formation deficits, highlighting their importance, yet their precise contributions remain largely unknown. Hence, we aim to investigate whether the centrosome/cilia axis is crucial for brain development and serves as a hub for human-specific mechanisms disrupted in NDDs. Towards this direction, we first demonstrated species-specific and cell-type-specific differences in cilia gene expression during mouse and human corticogenesis. Then, to dissect their role, we induced ectopic overexpression or silencing of these genes in the developing mouse cortex or in human brain organoids. Our findings suggest that manipulating cilia genes alters both the numbers and the position of NPCs and neurons in the developing cortex. Interestingly, primary cilium morphology is disrupted, as we find changes in cilium length, orientation and number that lead to disruption of the apical belt and altered delamination profiles during development. Our results give insight into the role of primary cilia in human cortical development and address fundamental questions regarding the diversity and convergence of gene function in development and disease manifestation. This work has the potential to uncover novel pharmacological targets, facilitate personalized medicine, and improve the lives of individuals affected by NDDs through targeted cilia-based therapies.

Seminar: Neuroscience

Modeling the fruit fly brain and body

Srinivas Turaga
HHMI Janelia Research Campus
Apr 18, 2024

Seminar: Neuroscience

Learning representations of specifics and generalities over time

Anna Schapiro
University of Pennsylvania
Apr 12, 2024

There is a fundamental tension between storing discrete traces of individual experiences, which allows recall of particular moments in our past without interference, and extracting regularities across these experiences, which supports generalization and prediction in similar situations in the future. One influential proposal for how the brain resolves this tension is that it separates the processes anatomically into Complementary Learning Systems, with the hippocampus rapidly encoding individual episodes and the neocortex slowly extracting regularities over days, months, and years. But this does not explain our ability to learn and generalize from new regularities in our environment quickly, often within minutes. We have put forward a neural network model of the hippocampus that suggests that the hippocampus itself may contain complementary learning systems, with one pathway specializing in the rapid learning of regularities and a separate pathway handling the region’s classic episodic memory functions. This proposal has broad implications for how we learn and represent novel information of specific and generalized types, which we test across statistical learning, inference, and category learning paradigms. We also explore how this system interacts with slower-learning neocortical memory systems, with empirical and modeling investigations into how the hippocampus shapes neocortical representations during sleep. Together, the work helps us understand how structured information in our environment is initially encoded and how it then transforms over time.

Seminar: Neuroscience

Modeling idiosyncratic evaluation of faces

Alexander Todorov
University of Chicago
Mar 26, 2024

Seminar: Neuroscience

Modeling Primate Vision (and Language)

Martin Schrimpf
NeuroX, EPFL
Dec 6, 2023

Seminar: Neuroscience

Modeling the Navigational Circuitry of the Fly

Larry Abbott
Columbia University
Dec 1, 2023

Navigation requires orienting oneself relative to landmarks in the environment, evaluating relevant sensory data, remembering goals, and converting all this information into motor commands that direct locomotion. I will present models, highly constrained by connectomic, physiological and behavioral data, for how these functions are accomplished in the fly brain.

SeminarNeuroscience

Bio-realistic multiscale modeling of cortical circuits

Anton Arkhipov
Allen Institute
Nov 24, 2023

A central question in neuroscience is how the structure of brain circuits determines their activity and function. To explore this systematically, we developed a 230,000-neuron model of mouse primary visual cortex (area V1). The model integrates a broad array of experimental data, including the distribution and morpho-electric properties of different neuron types in V1.

SeminarNeuroscience

Trends in NeuroAI - SwiFT: Swin 4D fMRI Transformer

Junbeom Kwon
Nov 21, 2023

Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri).

Title: SwiFT: Swin 4D fMRI Transformer

Abstract: Modeling spatiotemporal brain dynamics from high-dimensional data, such as functional Magnetic Resonance Imaging (fMRI), is a formidable task in neuroscience. Existing approaches for fMRI analysis utilize hand-crafted features, but the process of feature extraction risks losing essential information in fMRI scans. To address this challenge, we present SwiFT (Swin 4D fMRI Transformer), a Swin Transformer architecture that can learn brain dynamics directly from fMRI volumes in a memory- and computation-efficient manner. SwiFT achieves this by implementing a 4D window multi-head self-attention mechanism and absolute positional embeddings. We evaluate SwiFT using multiple large-scale resting-state fMRI datasets, including the Human Connectome Project (HCP), Adolescent Brain Cognitive Development (ABCD), and UK Biobank (UKB) datasets, to predict sex, age, and cognitive intelligence. Our experimental outcomes reveal that SwiFT consistently outperforms recent state-of-the-art models. Furthermore, by leveraging its end-to-end learning capability, we show that contrastive loss-based self-supervised pre-training of SwiFT can enhance performance on downstream tasks. Additionally, we employ an explainable AI method to identify the brain regions associated with sex classification. To our knowledge, SwiFT is the first Swin Transformer architecture to process high-dimensional spatiotemporal brain functional data in an end-to-end fashion. Our work holds substantial potential in facilitating scalable learning of functional brain imaging in neuroscience research by reducing the hurdles associated with applying Transformer models to high-dimensional fMRI.

Speaker: Junbeom Kwon is a research associate working in Prof. Jiook Cha’s lab at Seoul National University.

Paper link: https://arxiv.org/abs/2307.05916
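The core architectural idea named in the abstract, 4D window multi-head self-attention, restricts attention to local 4D windows of the (x, y, z, t) fMRI volume. A minimal sketch of the window-partitioning step is below; the array shapes and window size are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def window_partition_4d(vol, w):
    """Split a 4D (x, y, z, t) array into non-overlapping windows of size w per axis."""
    x, y, z, t = vol.shape
    assert all(s % w == 0 for s in vol.shape), "each axis must be divisible by w"
    vol = vol.reshape(x // w, w, y // w, w, z // w, w, t // w, w)
    # Group the window-index axes together, then flatten each window into tokens.
    vol = vol.transpose(0, 2, 4, 6, 1, 3, 5, 7)
    return vol.reshape(-1, w ** 4)  # (num_windows, tokens_per_window)

# Illustrative toy volume: 8x8x8 voxels over 4 timepoints, window size 2.
vol = np.random.rand(8, 8, 8, 4)
windows = window_partition_4d(vol, 2)
print(windows.shape)  # (128, 16): 128 windows of 2*2*2*2 = 16 tokens each
```

In the full architecture, self-attention would then be computed independently within each row of `windows`, keeping cost linear in the number of windows rather than quadratic in the whole volume.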

SeminarArtificial IntelligenceRecording

Mathematical and computational modelling of ocular hemodynamics: from theory to applications

Giovanna Guidoboni
University of Maine
Nov 14, 2023

Changes in ocular hemodynamics may be indicative of pathological conditions in the eye (e.g. glaucoma, age-related macular degeneration), but also elsewhere in the body (e.g. systemic hypertension, diabetes, neurodegenerative disorders). Thanks to its transparent fluids and structures that allow light to pass through, the eye offers a unique window on the circulation, from large to small vessels and from arteries to veins. Deciphering the causes that lead to changes in ocular hemodynamics in a specific individual could help prevent vision loss as well as aid in the diagnosis and management of diseases beyond the eye. In this talk, we will discuss how mathematical and computational modelling can help in this regard. We will focus on two main factors, namely blood pressure (BP), which drives the blood flow through the vessels, and intraocular pressure (IOP), which compresses the vessels and may impede the flow. Mechanism-driven models translate fundamental principles of physics and physiology into computable equations that allow for identification of cause-to-effect relationships among interplaying factors (e.g. BP, IOP, blood flow). While invaluable for causality, mechanism-driven models are often based on simplifying assumptions to make them tractable for analysis and simulation; however, this often brings into question their relevance beyond theoretical explorations. Data-driven models offer a natural remedy to address these shortcomings. Data-driven methods may be supervised (based on labelled training data) or unsupervised (clustering and other data analytics), and they include models based on statistics, machine learning, deep learning and neural networks. Data-driven models naturally thrive on large datasets, making them scalable to a plethora of applications. While invaluable for scalability, data-driven models are often perceived as black boxes, as their outcomes are difficult to explain in terms of fundamental principles of physics and physiology, and this limits the delivery of actionable insights. The combination of mechanism-driven and data-driven models allows us to harness the advantages of both: mechanism-driven models excel at interpretability but suffer from a lack of scalability, while data-driven models are excellent at scale but suffer in terms of generalizability and insights for hypothesis generation. This combined, integrative approach represents the pillar of the interdisciplinary approach to data science that will be discussed in this talk, with application to ocular hemodynamics and specific examples in glaucoma research.

SeminarNeuroscienceRecording

Virtual Brain Twins for Brain Medicine and Epilepsy

Viktor Jirsa
Aix Marseille Université - Inserm
Nov 8, 2023

Over the past decade we have demonstrated that the fusion of subject-specific structural information of the human brain with mathematical dynamic models allows building biologically realistic brain network models, which have a predictive value beyond the explanatory power of each approach independently. The network nodes hold neural population models, which are derived using mean field techniques from statistical physics expressing ensemble activity via collective variables. Our hybrid approach fuses data-driven with forward-modeling-based techniques and has been successfully applied to explain healthy brain function and clinical translation including aging, stroke and epilepsy. Here we illustrate the workflow along the example of epilepsy: we reconstruct personalized connectivity matrices of human epileptic patients using Diffusion Tensor Imaging (DTI). Subsets of brain regions generating seizures in patients with refractory partial epilepsy are referred to as the epileptogenic zone (EZ). During a seizure, paroxysmal activity is not restricted to the EZ, but may recruit other healthy brain regions and propagate activity through large brain networks. The identification of the EZ is crucial for the success of neurosurgery and presents one of the historically difficult questions in clinical neuroscience. The application of the latest techniques in Bayesian inference and model inversion, in particular Hamiltonian Monte Carlo, allows the estimation of the EZ, including estimates of confidence and diagnostics of performance of the inference. The example of epilepsy nicely underscores the predictive value of personalized large-scale brain network models. The workflow of end-to-end modeling is an integral part of the European neuroinformatics platform EBRAINS and enables neuroscientists worldwide to build and estimate personalized virtual brains.

SeminarNeuroscience

Metabolic Remodelling in the Developing Forebrain in Health and Disease

Gaia Novarino
Institute of Science and Technology Austria
Oct 31, 2023

Little is known about the critical metabolic changes that neural cells have to undergo during development and how temporary shifts in this program can influence brain circuitries and behavior. Motivated by the identification of autism-associated mutations in SLC7A5, a transporter for metabolically essential large neutral amino acids (LNAAs), we utilized metabolomic profiling to investigate the metabolic states of the cerebral cortex across various developmental stages. Our findings reveal significant metabolic restructuring occurring in the forebrain throughout development, with specific groups of metabolites exhibiting stage-specific changes. Through the manipulation of Slc7a5 expression in neural cells, we discovered an interconnected relationship between the metabolism of LNAAs and lipids within the cortex. Neuronal deletion of Slc7a5 influences the postnatal metabolic state, resulting in a shift in lipid metabolism and a cell-type-specific modification in neuronal activity patterns. This ultimately gives rise to enduring circuit dysfunction.

SeminarNeuroscience

NII Methods (journal club): NeuroQuery, comprehensive meta-analysis of human brain mapping

Andy Jahn
fMRI Lab, University of Michigan
Sep 1, 2023

We will discuss this paper on Neuroquery, a relatively new web-based meta-analysis tool: https://elifesciences.org/articles/53385.pdf. This is different from Neurosynth in that it generates meta-analysis maps using predictive modeling from the string of text provided at the prompt, instead of performing inferential statistics to calculate the overlap of activation from different studies. This allows the user to generate predictive maps for more nuanced cognitive processes - especially for clinical populations which may be underrepresented in the literature compared to controls - and can be useful in generating predictions about where the activity will be for one's own study, and for creating ROIs.

SeminarNeuroscience

Microbial modulation of zebrafish behavior and brain development

Judith S. Eisen
University of Oregon
May 16, 2023

There is growing recognition that host-associated microbiotas modulate intrinsic neurodevelopmental programs including those underlying human social behavior. Despite this awareness, the fundamental processes are generally not understood. We discovered that the zebrafish microbiota is necessary for normal social behavior. By examining neuronal correlates of behavior, we found that the microbiota restrains neurite complexity and targeting of key forebrain neurons within the social behavior circuitry. The microbiota is also necessary for both localization and molecular functions of forebrain microglia, brain-resident phagocytes that remodel neuronal arbors. In particular, the microbiota promotes expression of complement signaling pathway components important for synapse remodeling. Our work provides evidence that the microbiota modulates zebrafish social behavior by stimulating microglial remodeling of forebrain circuits during early neurodevelopment and suggests molecular pathways for therapeutic interventions during atypical neurodevelopment.

SeminarNeuroscience

Off-policy learning in the basal ganglia

Ashok Litwin-Kumar
Columbia University, New York
May 3, 2023

I will discuss work with Jack Lindsey modeling reinforcement learning for action selection in the basal ganglia. I will argue that the presence of multiple brain regions, in addition to the basal ganglia, that contribute to motor control motivates the need for an off-policy basal ganglia learning algorithm. I will then describe a biological implementation of such an algorithm that predicts tuning of dopamine neurons to a quantity we call "action surprise," in addition to reward prediction error. In the same model, an implementation of learning from a motor efference copy also predicts a novel solution to the problem of multiplexing feedforward and efference-related striatal activity. The solution exploits the difference between D1 and D2-expressing medium spiny neurons and leads to predictions about striatal dynamics.

SeminarArtificial IntelligenceRecording

Computational models and experimental methods for the human cornea

Anna Pandolfi
Politecnico di Milano
May 2, 2023

The eye is a multi-component biological system, where mechanics, optics, transport phenomena and chemical reactions are strictly interlaced, characterized by the typical bio-variability in sizes and material properties. The eye’s response to external action is patient-specific and can be predicted only by a customized approach that accounts for the multiple physics and for the intrinsic microstructure of the tissues, developed with the aid of state-of-the-art computational biomechanics. Our activity in recent years has been devoted to the development of a comprehensive model of the cornea that aims at being entirely patient-specific. While the geometrical aspects are fully under control, given the sophisticated diagnostic machinery able to provide fully three-dimensional images of the eye, the major difficulties are related to the characterization of the tissues, which requires the setup of in-vivo tests to complement the well documented results of in-vitro tests. The interpretation of in-vivo tests is very complex, since the entire structure of the eye is involved and the characterization of the single tissue is not trivial. The availability of micromechanical models constructed from detailed images of the eye represents an important support for the characterization of the corneal tissues, especially in the case of pathologic conditions. In this presentation I will provide an overview of the research developed in our group in terms of computational models and experimental approaches for the human cornea.

SeminarPsychology

Face and voice perception as a tool for characterizing perceptual decisions and metacognitive abilities across the general population and psychosis spectrum

Léon Franzen
University of Luebeck
Apr 26, 2023

Humans constantly make perceptual decisions on human faces and voices. These regularly come with the challenge of receiving only uncertain sensory evidence, resulting from noisy input and noisy neural processes. Efficiently adapting one’s internal decision system including prior expectations and subsequent metacognitive assessments to these challenges is crucial in everyday life. However, the exact decision mechanisms and whether these represent modifiable states remain unknown in the general population and clinical patients with psychosis. Using data from a laboratory-based sample of healthy controls and patients with psychosis as well as a complementary, large online sample of healthy controls, I will demonstrate how a combination of perceptual face and voice recognition decision fidelity, metacognitive ratings, and Bayesian computational modelling may be used as indicators to differentiate between non-clinical and clinical states in the future.

SeminarNeuroscienceRecording

Assigning credit through the "other” connectome

Eric Shea-Brown
University of Washington, Seattle
Apr 19, 2023

Learning in neural networks requires assigning the right values to thousands to trillions of individual connections, so that the network as a whole produces the desired behavior. Neuroscientists have gained insights into this “credit assignment” problem through decades of experimental, modeling, and theoretical studies. This has suggested key roles for synaptic eligibility traces and top-down feedback signals, among other factors. Here we study the potential contribution of another type of signaling that is being revealed in greater and greater fidelity by ongoing molecular and genomics studies. This is the set of modulatory pathways local to a given circuit, which form an intriguing second type of connectome overlaid on top of synaptic connectivity. We will share ongoing modeling and theoretical work that explores the possible roles of this local modulatory connectome in network learning.

SeminarNeuroscienceRecording

From cells to systems: multiscale studies of the epileptic brain

Boris Bernhardt
Montreal Neurological Institute
Mar 29, 2023

It is increasingly recognized that epilepsy affects human brain organization across multiple scales, ranging from cellular alterations in specific regions to macroscale network imbalances. My talk will overview an emerging paradigm that integrates cellular, neuroimaging, and network modelling approaches to faithfully characterize the extent of structural and functional alterations in the common epilepsies. I will also discuss how a multiscale framework can help derive clinically useful biomarkers of dysfunction, and how these methods may guide surgical planning and prognostics.

SeminarArtificial IntelligenceRecording

Unique features of oxygen delivery to the mammalian retina

Robert Linsenmeier
Northwestern University
Feb 7, 2023

Like all neural tissue, the retina has a high metabolic demand, and requires a constant supply of oxygen. Second and third order neurons are supplied by the retinal circulation, whose characteristics are similar to brain circulation. However, the photoreceptor region, which occupies half of the retinal thickness, is avascular, and relies on diffusion of oxygen from the choroidal circulation, whose properties are very different, as well as the retinal circulation. By fitting diffusion models to oxygen measurements made with oxygen microelectrodes, it is possible to understand the relative roles of the two circulations under normal conditions of light and darkness, and what happens if the retina is detached or the retinal circulation is occluded. Most of this work has been done in vivo in rat, cat, and monkey, but recent work in the isolated mouse retina will also be discussed.
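The avascular photoreceptor layer described above is typically analyzed with one-dimensional steady-state diffusion models fitted to microelectrode PO2 profiles. A minimal sketch in that spirit is below, with oxygen entering from the choroid at x = 0 and the retinal circulation at x = L and consumed at a uniform rate Q; all parameter values are illustrative assumptions, not measured constants, and real models use multiple layers with layer-specific consumption.

```python
import numpy as np

# Governing equation (uniform consumption): D*k * d2P/dx2 = Q,
# with boundary conditions P(0) = P_chor and P(L) = P_ret.

def oxygen_profile(x, L, P_chor, P_ret, Q, Dk):
    """Analytic steady-state PO2 profile for uniform consumption Q."""
    a = Q / (2 * Dk)                      # curvature from consumption
    b = (P_ret - P_chor) / L - a * L      # slope fixed by the boundary values
    return a * x ** 2 + b * x + P_chor

L = 100e-4                                # layer thickness (cm), illustrative
x = np.linspace(0.0, L, 101)
P = oxygen_profile(x, L, P_chor=60.0, P_ret=20.0, Q=150.0, Dk=1e-4)

# The interior minimum of the profile marks the depth most vulnerable to hypoxia.
print("minimum PO2:", round(P.min(), 1), "mmHg at",
      round(x[np.argmin(P)] * 1e4), "µm from the choroid")
```

Fitting such profiles to measured PO2 data is what allows the relative contributions of the choroidal and retinal circulations to be estimated under different conditions (light vs. darkness, detachment, occlusion).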

SeminarNeuroscienceRecording

Predictive modeling, cortical hierarchy, and their computational implications

Choong-Wan Woo & Seok-Jun Hong
Sungkyunkwan University
Jan 17, 2023

Predictive modeling and dimensionality reduction of functional neuroimaging data have provided rich information about the representations and functional architectures of the human brain. While these approaches have been effective in many cases, we will discuss how neglecting the internal dynamics of the brain (e.g., spontaneous activity, global dynamics, effective connectivity) and its underlying computational principles may hinder our progress in understanding and modeling brain functions. By reexamining evidence from our previous and ongoing work, we will propose new hypotheses and directions for research that consider both internal dynamics and the computational principles that may govern brain processes.

SeminarNeuroscience

Modeling shared and variable information encoded in fine-scale cortical topographies

James Haxby
Dartmouth College
Dec 13, 2022

Information is encoded in fine-scale functional topographies that vary from brain to brain. Hyperalignment models information that is shared across brains in a high-dimensional common information space. Hyperalignment transformations project idiosyncratic individual topographies into the common model information space. These transformations contain topographic basis functions, affording estimates of how shared information in the common model space is instantiated in the idiosyncratic functional topographies of individual brains. This new model of the functional organization of cortex – as multiplexed, overlapping basis functions – captures the idiosyncratic conformations of both coarse-scale topographies, such as retinotopy and category-selectivity, and fine-scale topographies. Hyperalignment also makes it possible to investigate how information that is encoded in fine-scale topographies differs across brains. These individual differences in fine-grained cortical function were not accessible with previous methods.
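The transformations at the heart of hyperalignment can be illustrated with the orthogonal Procrustes problem: find a rotation that maps one subject's response-pattern matrix into another's voxel space. The sketch below uses synthetic data (the dimensions, noise level, and single-pair setup are illustrative assumptions; the full method iterates across many subjects to build the common model space).

```python
import numpy as np

rng = np.random.default_rng(3)
n_stimuli, n_voxels = 50, 20

# Synthetic "shared information" responses, plus a second subject whose
# topography is an unknown rotation of the first, with measurement noise.
shared = rng.normal(size=(n_stimuli, n_voxels))
R_true, _ = np.linalg.qr(rng.normal(size=(n_voxels, n_voxels)))
subj_a = shared
subj_b = shared @ R_true + 0.05 * rng.normal(size=(n_stimuli, n_voxels))

# Procrustes solution: R = argmin ||subj_b @ R - subj_a||_F over orthogonal R,
# obtained from the SVD of the cross-covariance matrix.
U, _, Vt = np.linalg.svd(subj_b.T @ subj_a)
R = U @ Vt

err = np.linalg.norm(subj_b @ R - subj_a) / np.linalg.norm(subj_a)
print("relative residual after alignment:", round(err, 3))  # small: aligned
```

Because R is orthogonal, the mapping reorganizes the topography without inflating or shrinking response geometry, which is what lets the aligned representations be compared across brains.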

SeminarNeuroscienceRecording

Inflammation and Pregnancy

Kenichiro Motomura & Nuriban Valero-Pacheco
Wayne State University and Rutgers University
Dec 8, 2022

Talk(1): Fetal and maternal NLRP3 signaling is required for preterm labor and birth. (DOI: 10.1172/jci.insight.158238) Talk(2): Maternal IL-33 critically regulates tissue remodeling and type 2 immune responses in the uterus during early pregnancy in mice (DOI: 10.1073/pnas.2123267119)

SeminarNeuroscienceRecording

Can a single neuron solve MNIST? Neural computation of machine learning tasks emerges from the interaction of dendritic properties

Ilenna Jones
University of Pennsylvania
Dec 7, 2022

Physiological experiments have highlighted how the dendrites of biological neurons can nonlinearly process distributed synaptic inputs. However, it is unclear how qualitative aspects of a dendritic tree, such as its branched morphology, its repetition of presynaptic inputs, voltage-gated ion channels, electrical properties and complex synapses, determine neural computation beyond this apparent nonlinearity. While it has been speculated that the dendritic tree of a neuron can be seen as a multi-layer neural network and it has been shown that such an architecture could be computationally strong, we do not know if that computational strength is preserved under these qualitative biological constraints. Here we simulate multi-layer neural network models of dendritic computation with and without these constraints. We find that dendritic model performance on interesting machine learning tasks is not hurt by most of these constraints and may synergistically benefit from all of them combined. Our results suggest that single real dendritic trees may be able to learn a surprisingly broad range of tasks through the emergent capabilities afforded by their properties.

SeminarNeuroscienceRecording

Network inference via process motifs for lagged correlation in linear stochastic processes

Alice Schwarze
Dartmouth College
Nov 18, 2022

A major challenge for causal inference from time-series data is the trade-off between computational feasibility and accuracy. Motivated by process motifs for lagged covariance in an autoregressive model with slow mean-reversion, we propose to infer networks of causal relations via pairwise edge measures (PEMs) that one can easily compute from lagged correlation matrices. Motivated by contributions of process motifs to covariance and lagged variance, we formulate two PEMs that correct for confounding factors and for reverse causation. To demonstrate the performance of our PEMs, we consider network inference from simulations of linear stochastic processes, and we show that our proposed PEMs can infer networks accurately and efficiently. Specifically, for slightly autocorrelated time-series data, our approach achieves accuracies higher than or similar to Granger causality, transfer entropy, and convergent cross-mapping -- but with much shorter computation time than possible with any of these methods. Our fast and accurate PEMs are easy-to-implement methods for network inference with a clear theoretical underpinning. They provide promising alternatives to current paradigms for the inference of linear models from time-series data, including Granger causality, vector-autoregression, and sparse inverse covariance estimation.
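The setting can be illustrated with a toy example: simulate a linear autoregressive process on a known directed network and score candidate edges from the lagged correlation matrix. The antisymmetric score used here, C_lag[i, j] - C_lag[j, i], is a naive stand-in for the corrected PEMs of the talk; the network, dynamics, and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, eps = 5, 20000, 0.05

A = np.zeros((n, n))
A[1, 0] = A[2, 1] = A[4, 3] = 1.0   # true directed edges: 0->1, 1->2, 3->4

# Linear stochastic process with slow mean-reversion: x[t+1] = 0.9 x[t] + eps A x[t] + noise.
x = np.zeros(n)
X = np.empty((T, n))
for t in range(T):
    x = 0.9 * x + eps * (A @ x) + rng.normal(size=n)
    X[t] = x

# Lagged correlation matrix: C_lag[i, j] ~ corr(x_i[t], x_j[t+1]).
X0 = (X[:-1] - X[:-1].mean(0)) / X[:-1].std(0)
X1 = (X[1:] - X[1:].mean(0)) / X[1:].std(0)
C_lag = X0.T @ X1 / (T - 1)

score = C_lag - C_lag.T             # positive score[i, j] suggests an edge i -> j
flat = np.argsort(score, axis=None)[::-1][:3]
top = sorted(tuple(int(v) for v in np.unravel_index(k, score.shape)) for k in flat)
print("top-scoring directed edges:", top)
```

Even this uncorrected score recovers strong true edges in easy cases; the PEMs discussed in the talk add corrections for confounding and reverse causation, which is where naive lagged-correlation scores fail.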

SeminarNeuroscienceRecording

Beyond Biologically Plausible Spiking Networks for Neuromorphic Computing

A. Subramoney
University of Bochum
Nov 9, 2022

Biologically plausible spiking neural networks (SNNs) are an emerging architecture for deep learning tasks due to their energy efficiency when implemented on neuromorphic hardware. However, many of the biological features are at best irrelevant and at worst counterproductive when evaluated in the context of task performance and suitability for neuromorphic hardware. In this talk, I will present an alternative paradigm to design deep learning architectures with good task performance in real-world benchmarks while maintaining all the advantages of SNNs. We do this by focusing on two main features – event-based computation and activity sparsity. Starting from the performant gated recurrent unit (GRU) deep learning architecture, we modify it to make it event-based and activity-sparse. The resulting event-based GRU (EGRU) is extremely efficient for both training and inference. At the same time, it achieves performance close to conventional deep learning architectures in challenging tasks such as language modelling, gesture recognition and sequential MNIST.
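The event-based, activity-sparse idea can be sketched as a GRU-like cell whose units communicate only when their state magnitude crosses a threshold. This is a toy illustration, not the published EGRU architecture: the sizes, weights, and thresholding rule are illustrative assumptions, and the real model's gating and training details differ.

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_h = 4, 8
W_z = rng.normal(0, 0.5, (d_h, d_in + d_h))   # update-gate weights
W_c = rng.normal(0, 0.5, (d_h, d_in + d_h))   # candidate-state weights
threshold = 0.5

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def egru_like_step(x, h, y_prev):
    """One step; the recurrence sees only the sparse event output y_prev."""
    xy = np.concatenate([x, y_prev])
    z = sigmoid(W_z @ xy)                 # update gate
    c = np.tanh(W_c @ xy)                 # candidate state
    h = (1 - z) * h + z * c               # internal state (dense, private)
    events = np.abs(h) > threshold        # which units fire an event
    y = np.where(events, h, 0.0)          # sparse output communicated onward
    return h, y, events

h, y = np.zeros(d_h), np.zeros(d_h)
n_events = 0
for _ in range(50):
    h, y, events = egru_like_step(rng.normal(size=d_in), h, y)
    n_events += int(events.sum())
print(n_events, "events out of", 50 * d_h, "unit-steps")
```

Because silent units emit exactly zero, downstream matrix products only need the columns of active units, which is the source of the efficiency on neuromorphic and sparse hardware.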

SeminarNeuroscienceRecording

AI-assisted language learning: Assessing learners who memorize and reason by analogy

Pierre-Alexandre Murena
University of Helsinki
Oct 5, 2022

Vocabulary learning applications like Duolingo have millions of users around the world, yet are based on very simple heuristics for choosing the teaching material provided to their users. In this presentation, we will discuss the possibility of developing more advanced artificial teachers based on modeling of the learner’s inner characteristics. In the case of teaching vocabulary, understanding how the learner memorizes is enough. When it comes to picking grammar exercises, it becomes essential to assess how the learner reasons, in particular by analogy. This second application will illustrate how analogical and case-based reasoning can be employed in an alternative way in education: not as the teaching algorithm, but as part of the learner’s model.

SeminarNeuroscienceRecording

Building System Models of Brain-Like Visual Intelligence with Brain-Score

Martin Schrimpf
MIT
Oct 5, 2022

Research in the brain and cognitive sciences attempts to uncover the neural mechanisms underlying intelligent behavior in domains such as vision. Due to the complexities of brain processing, studies necessarily had to start with a narrow scope of experimental investigation and computational modeling. I argue that it is time for our field to take the next step: build system models that capture a range of visual intelligence behaviors along with the underlying neural mechanisms. To make progress on system models, we propose integrative benchmarking – integrating experimental results from many laboratories into suites of benchmarks that guide and constrain those models at multiple stages and scales. We showcase this approach by developing Brain-Score benchmark suites for neural (spike rates) and behavioral experiments in the primate visual ventral stream. By systematically evaluating a wide variety of model candidates, we not only identify models beginning to match a range of brain data (~50% explained variance), but also discover that models’ brain scores are predicted by their object categorization performance (up to 70% ImageNet accuracy). Using the integrative benchmarks, we develop improved state-of-the-art system models that more closely match shallow recurrent neuroanatomy and early visual processing, predict primate temporal processing, become more robust, and require fewer supervised synaptic updates. Taken together, these integrative benchmarks and system models are first steps to modeling the complexities of brain processing in an entire domain of intelligence.

SeminarNeuroscienceRecording

General purpose event-based architectures for deep learning

Anand Subramoney
Institute for Neural Computation
Oct 5, 2022

Biologically plausible spiking neural networks (SNNs) are an emerging architecture for deep learning tasks due to their energy efficiency when implemented on neuromorphic hardware. However, many of the biological features are at best irrelevant and at worst counterproductive when evaluated in the context of task performance and suitability for neuromorphic hardware. In this talk, I will present an alternative paradigm to design deep learning architectures with good task performance in real-world benchmarks while maintaining all the advantages of SNNs. We do this by focusing on two main features -- event-based computation and activity sparsity. Starting from the performant gated recurrent unit (GRU) deep learning architecture, we modify it to make it event-based and activity-sparse. The resulting event-based GRU (EGRU) is extremely efficient for both training and inference. At the same time, it achieves performance close to conventional deep learning architectures in challenging tasks such as language modelling, gesture recognition and sequential MNIST.

SeminarNeuroscience

Internally Organized Abstract Task Maps in the Mouse Medial Frontal Cortex

Mohamady El-Gaby
University of Oxford
Sep 28, 2022

New tasks are often similar in structure to old ones. Animals that take advantage of such conserved or “abstract” task structures can master new tasks with minimal training. To understand the neural basis of this abstraction, we developed a novel behavioural paradigm for mice: the “ABCD” task, and recorded from their medial frontal neurons as they learned. Animals learned multiple tasks where they had to visit 4 rewarded locations on a spatial maze in sequence, which defined a sequence of four “task states” (ABCD). Tasks shared the same circular transition structure (… ABCDABCD …) but differed in the spatial arrangement of rewards. As well as improving across tasks, mice inferred that A followed D (i.e. completed the loop) on the very first trial of a new task. This “zero-shot inference” is only possible if animals had learned the abstract structure of the task. Across tasks, individual medial Frontal Cortex (mFC) neurons maintained their tuning to the phase of an animal’s trajectory between rewards but not their tuning to task states, even in the absence of spatial tuning. Intriguingly, groups of mFC neurons formed modules of coherently remapping neurons that maintained their tuning relationships across tasks. Such tuning relationships were expressed as replay/preplay during sleep, consistent with an internal organisation of activity into multiple, task-matched ring attractors. Remarkably, these modules were anchored to spatial locations: neurons were tuned to specific task space “distances” from a particular spatial location. These newly discovered “Spatially Anchored Task clocks” (SATs), suggest a novel algorithm for solving abstraction tasks. Using computational modelling, we show that SATs can perform zero-shot inference on new tasks in the absence of plasticity and guide optimal policy in the absence of continual planning. These findings provide novel insights into the Frontal mechanisms mediating abstraction and flexible behaviour.

SeminarNeuroscienceRecording

Nonlinear neural network dynamics accounts for human confidence in a sequence of perceptual decisions

Kevin Berlemont
Wang Lab, NYU Center for Neural Science
Sep 21, 2022

Electrophysiological recordings during perceptual decision tasks in monkeys suggest that the degree of confidence in a decision is based on a simple neural signal produced by the neural decision process. Attractor neural networks provide an appropriate biophysical modeling framework, and account for the experimental results very well. However, it remains unclear whether attractor neural networks can account for confidence reports in humans. We present the results from an experiment in which participants are asked to perform an orientation discrimination task, followed by a confidence judgment. Here we show that an attractor neural network model quantitatively reproduces, for each participant, the relations between accuracy, response times and confidence. We show that the attractor neural network also accounts for confidence-specific sequential effects observed in the experiment (participants are faster on trials following high-confidence trials), as well as non-confidence-specific sequential effects. Remarkably, this is obtained as an inevitable outcome of the network dynamics, without any feedback specific to the previous decision (that would result in, e.g., a change in the model parameters before the onset of the next trial). Our results thus suggest that a metacognitive process such as confidence in one’s decision is linked to the intrinsically nonlinear dynamics of the decision-making neural network.
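The basic decision-plus-confidence readout can be sketched with a minimal two-population attractor model: two competing rate units accumulate evidence under mutual inhibition, the choice is the first unit to cross a firing threshold, and confidence is the activity difference at decision time. All parameters below are illustrative assumptions, not the fitted model of the talk.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def trial(coherence, rng, dt=1e-3, tau=0.02, thresh=0.8):
    """One simulated trial; returns (choice, confidence, reaction time)."""
    r = np.array([0.1, 0.1])                       # population rates
    I = np.array([0.5 + coherence, 0.5 - coherence])
    for step in range(2000):                       # up to 2 s
        # Self-excitation (2.0) and mutual inhibition (-1.5) give attractor dynamics.
        inp = 2.0 * r - 1.5 * r[::-1] + I + rng.normal(0, 0.4, 2)
        r += dt / tau * (-r + sigmoid(inp))
        if r.max() > thresh:
            return int(np.argmax(r)), abs(r[0] - r[1]), (step + 1) * dt
    return None, abs(r[0] - r[1]), 2.0             # no decision within 2 s

rng = np.random.default_rng(2)
results = [trial(0.2, rng) for _ in range(200)]
acc = np.mean([c == 0 for c, _, _ in results])
print("fraction choosing the favored option:", acc)
```

Because confidence here is read directly from the post-decision network state, sequential effects (e.g., faster responses after high-confidence trials) can emerge from the dynamics alone, with no trial-to-trial parameter changes, which is the point the abstract emphasizes.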

SeminarNeuroscience

Invariant neural subspaces maintained by feedback modulation

Laura Naumann
Bernstein Center for Computational Neuroscience, Berlin
Jul 14, 2022

This session is a double feature of the Cologne Theoretical Neuroscience Forum and the Institute of Neuroscience and Medicine (INM-6) Computational and Systems Neuroscience of the Jülich Research Center.

Seminar · Neuroscience

Successes and failures of current AI as a model of visual cognition

Gabriel Kreiman
Harvard
Jul 6, 2022
Seminar · Neuroscience

From Computation to Large-scale Neural Circuitry in Human Belief Updating

Tobias Donner
University Medical Center Hamburg-Eppendorf
Jun 29, 2022

Many decisions under uncertainty entail dynamic belief updating: multiple pieces of evidence informing about the state of the environment are accumulated across time to infer the environmental state and choose a corresponding action. Traditionally, this process has been conceptualized as a linear and perfect (i.e., lossless) integration of sensory information along purely feedforward sensory-motor pathways. Yet natural environments can undergo hidden changes in their state, which requires a non-linear accumulation of decision evidence that balances stability against flexibility in response to change. How this adaptive computation is implemented in the brain has remained unknown. In this talk, I will present an approach that my laboratory has developed to identify evidence accumulation signatures in human behavior and neural population activity (measured with magnetoencephalography, MEG) across a large number of cortical areas. Applying this approach to data recorded during visual evidence accumulation tasks with change-points, we find that behavior and neural activity in frontal and parietal regions involved in motor planning exhibit hallmark signatures of adaptive evidence accumulation. The same signatures of adaptive behavior and neural activity emerge naturally from simulations of a biophysically detailed model of a recurrent cortical microcircuit. The MEG data further show that decision dynamics in parietal and frontal cortex are mirrored by a selective modulation of the state of early visual cortex. This state modulation is (i) specifically expressed in the alpha frequency band, (ii) consistent with feedback of evolving belief states from frontal cortex, (iii) dependent on environmental volatility, and (iv) amplified by pupil-linked arousal responses during evidence accumulation.
Together, our findings link normative decision computations to recurrent cortical circuit dynamics and highlight the adaptive nature of decision-related long-range feedback processing in the brain.
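The abstract does not spell out the update rule. A standard normative instance of non-linear evidence accumulation in changing environments, often used in this line of work, is the Glaze et al. (2015) update, in which the prior belief (a log-posterior odds) is discounted toward zero as a function of the assumed hazard rate before new evidence is added; the sketch below is that published rule, not the talk's specific implementation:

```python
import math

def glaze_update(prior_belief, llr, hazard):
    """Normative belief update for an environment whose hidden state
    changes with probability `hazard` per sample (Glaze et al., 2015).

    prior_belief: log-posterior odds before the new sample.
    llr: log-likelihood ratio carried by the new sample.
    """
    k = (1.0 - hazard) / hazard
    # Non-linear discounting of the prior toward zero: for hazard -> 0
    # this reduces to perfect linear integration; for hazard = 0.5 the
    # prior is forgotten entirely.
    discounted = prior_belief + math.log(k + math.exp(-prior_belief)) \
                              - math.log(k + math.exp(prior_belief))
    return discounted + llr
```

The discounting saturates at ±log((1-H)/H), which is what produces the stability-flexibility tradeoff: strong beliefs are capped, so a run of contradictory evidence can overturn them quickly after a change-point.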

Seminar · Neuroscience · Recording

Drifting assemblies for persistent memory: Neuron transitions and unsupervised compensation

Raoul-Martin Memmesheimer
University of Bonn, Germany
Jun 29, 2022

Change is ubiquitous in living beings. In particular, the connectome and neural representations can change. Nevertheless, behaviors and memories often persist over long times. In a standard model, associative memories are represented by assemblies of strongly interconnected neurons. For faithful storage, these assemblies are assumed to consist of the same neurons over time. We propose a contrasting memory model with complete temporal remodeling of assemblies, based on experimentally observed changes of synapses and neural representations. The assemblies drift freely as noisy autonomous network activity or spontaneous synaptic turnover induces neuron exchange. The exchange can be described analytically by reduced random-walk models derived from spiking neural network dynamics or from first principles. The gradual exchange allows activity-dependent and homeostatic plasticity to conserve the representational structure and to keep inputs, outputs, and assemblies consistent. This leads to persistent memory. Our findings explain recent experimental results on the temporal evolution of fear memory representations and suggest that memory systems need to be understood in their entirety, as individual parts may constantly change.
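As a toy caricature of the gradual exchange described above (not the spiking-network or derived random-walk models from the talk; sizes and exchange counts are arbitrary), the sketch below swaps single neurons between two fixed-size assemblies and tracks how membership overlap with the initial configuration decays while the assembly structure itself is conserved:

```python
import random

def drift_assemblies(n_neurons=100, n_swaps=500, seed=0):
    """Pairwise neuron exchange between two assemblies: each event moves
    one neuron from A to B and one from B to A, so both assembly sizes
    (the representational structure) stay fixed while identity drifts."""
    rng = random.Random(seed)
    a = set(range(n_neurons // 2))              # assembly A
    b = set(range(n_neurons // 2, n_neurons))   # assembly B
    initial_a = frozenset(a)
    overlaps = []
    for _ in range(n_swaps):
        na = rng.choice(sorted(a))              # neuron leaving A
        nb = rng.choice(sorted(b))              # neuron leaving B
        a.remove(na); b.add(na)
        b.remove(nb); a.add(nb)
        # fraction of A's original members still in A
        overlaps.append(len(a & initial_a) / len(initial_a))
    return a, b, overlaps
```

Overlap with the initial membership decays toward chance (0.5 here) while |A| and |B| never change, mirroring the claim that memories can persist even though their neuronal support turns over completely.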

Seminar · Neuroscience

Feedforward and feedback processes in visual recognition

Thomas Serre
Brown University
Jun 22, 2022

Progress in deep learning has spawned great successes in many engineering applications. As a prime example, convolutional neural networks, a type of feedforward neural network, are now approaching – and sometimes even surpassing – human accuracy on a variety of visual recognition tasks. In this talk, however, I will show that these neural networks and their recent extensions exhibit a limited ability to solve seemingly simple visual reasoning problems involving incremental grouping, similarity, and spatial relation judgments. Our group has developed a recurrent network model of classical and extra-classical receptive field circuits that is constrained by the anatomy and physiology of the visual cortex. The model has been shown to account for diverse visual illusions, providing computational evidence for a novel canonical circuit that is shared across visual modalities. I will show that this computational neuroscience model can be turned into a modern end-to-end trainable deep recurrent network architecture that addresses some of the shortcomings exhibited by state-of-the-art feedforward networks on complex visual reasoning tasks. This suggests that neuroscience may contribute powerful new ideas and approaches to computer science and artificial intelligence.
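The talk's model is a biologically constrained recurrent circuit. As a much-reduced, hypothetical illustration of why recurrence helps with incremental grouping, the 1-D sketch below lets lateral connections spread activity outward from a seeded location over iterations, reaching positions far beyond what a single feedforward pass covers:

```python
import numpy as np

def recurrent_grouping(x, w_lateral, n_steps=10):
    """Toy 1-D recurrent layer. x is the feedforward drive; w_lateral is a
    small symmetric kernel (e.g. [0.4, 0.0, 0.4]) standing in for
    horizontal connections. With n_steps=0 this is purely feedforward."""
    h = np.maximum(x, 0.0)                       # ReLU of feedforward input
    for _ in range(n_steps):
        lateral = np.convolve(h, w_lateral, mode="same")
        h = np.maximum(x + lateral, 0.0)         # activity spreads to neighbors
        h = np.minimum(h, 1.0)                   # saturation keeps rates bounded
    return h
```

Seeding x at one position and iterating spreads activity roughly one kernel radius per step, a crude analogue of the iterative filling-in that a feedforward network of fixed depth cannot perform.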

ePoster

Adolescent maturation of cortical excitation-inhibition balance based on individualized biophysical network modeling

Amin Saberi, Kevin Wischnewski, Kyesam Jung, Leon Lotter, H. Schaare, Tobias Banaschewski, Gareth Barker, Arun Bokde, Sylvane Desrivières, Herta Flor, Antoine Grigis, Hugh Garavan, Penny Gowland, Andreas Heinz, Rüdiger Brühl, Jean-Luc Martinot, Marie-Laure Paillère Martinot, Eric Artiges, Frauke Nees, Dimitri Papadopoulos Orfanos, Herve Lemaitre, Luise Poustka, Sarah Hohmann, Nathalie Holz, Christian Baeuchl, Michael Smolka, Nilakshi Vaidya, Henrik Walter, Robert Whelan, Gunther Schumann, Tomas Paus, Juergen Dukart, Boris Bernhardt, Oleksandr Popovych, Simon Eickhoff, Sofie Valk

Bernstein Conference 2024

ePoster

Mechanistic modeling of Drosophila neural population codes in natural social communication

Rich Pang, Christa Baker, Diego Pacheco, Jonathan Pillow, Mala Murthy

COSYNE 2022

ePoster

Modeling the orbitofrontal cortex function in navigation through an RL-RNN implementation

Carlos Wert Carvajal, Raunak Basu, Albert Miguel-Lopez, Hiroshi Ito, Tatjana Tchumatchenko

COSYNE 2023

ePoster

Hard to digest: The challenges of modeling the dynamics of the C. elegans pharynx

James Ferguson, Tim P Vogels

FENS Forum 2024

ePoster

Method for 3D quantitative analysis of enteric nervous system remodeling in mouse and human gut tissues

Arielle Planchette, Ivana Gantar, Yoseline Cabara, Jules Scholler, Aleksander Sobolewski, Stéphane Pagès, Michalina Gora

FENS Forum 2024

ePoster

PTCHD1 modulates cytoskeleton remodeling through regulation of Rac1-PAK signaling pathway, consistent with neurodevelopmental disorders phenotype

Dévina Ung, Sylviane Marouillat, Thibaut Laboute, Judith Halewa, Chloé Boisseau, Marie Vossels, Frédéric Laumonnier

FENS Forum 2024

ePoster

Controlled sampling of non-equilibrium brain dynamics: modeling and estimation from neuroimaging signals

Matthieu Gilson

Bernstein Conference 2024

ePoster

cuBNM: GPU-Accelerated Biophysical Network Modeling

Amin Saberi, Kevin Wischnewski, Kyesam Jung, Leonard Sasse, Felix Hoffstaedter, Oleksandr Popovych, Boris Bernhardt, Simon Eickhoff, Sofie Valk

Bernstein Conference 2024

ePoster

Deep generative networks as a computational approach for global non-linear control modeling in the nematode C. elegans

Doris Voina, Steven Brunton, Jose Kutz

Bernstein Conference 2024

ePoster

Deep inverse modeling reveals dynamic-dependent invariances in neural circuits mechanisms

Richard Gao, Michael Deistler, Auguste Schulz, Pedro Gonçalves, Jakob Macke

Bernstein Conference 2024

ePoster

A new framework for modeling innate capabilities in network with diverse types of spiking neurons: Probabilistic Skeleton

Christoph Stöckl, Dominik Lang, Alice Dauphin, Wolfgang Maass

Bernstein Conference 2024

ePoster

Modeling the autistic cerebellum: propagation of granule cells alteration through the granular layer microcircuit

Danilo Benozzo, Alessio Marta, Robin De Schepper, Martina Rizza, Stefano Masoli, Egidio D'Angelo, Claudia Casellato

Bernstein Conference 2024

ePoster

Modeling competitive memory encoding using a Hopfield network

Julia Pronoza, Sen Cheng

Bernstein Conference 2024

ePoster

Modeling spatial and temporal attractive and repulsive biases in perception

Stefan Glasauer, W. Medendorp, Michel-Ange Amorim

Bernstein Conference 2024

ePoster

Modeling HCN Channel-Mediated Modulation on Dendro-Somatic Electric Coupling in CA1 Pyramidal Cells

Marvin Marz

Bernstein Conference 2024

ePoster

Modeling Decision-Making in Trajectory Extrapolation Tasks: Comparing Random Sampling Model and Multi-Layer Perceptron Approaches

Olga Polezhaeva, Michel-Ange Amorim, Stefan Glasauer

Bernstein Conference 2024

ePoster

Modeling gait dynamics with switching non-linear dynamical systems

Heike Stein, Njiva Andrianarivelo, Clarisse Batifol, Jeremy Gabillet, Ali Jalil, Michael Graupner, N. Alex Cayco Gajic

Bernstein Conference 2024

ePoster

Quantitative modeling of the emergence of macroscopic grid-like representations

Ikhwan Bin Khalid, Eric Reifenstein, Naomi Auer, Lukas Kunz, Richard Kempter

Bernstein Conference 2024

ePoster

Rapid prototyping in spiking neural network modeling with NESTML and NEST Desktop

Sebastian Spreizer, Charl Linssen, Pooja Babu, Abigail Morrison, Markus Diesmann, Benjamin Weyers

Bernstein Conference 2024

ePoster

Task choice influences single-neuron tuning predictions in connectome-constrained modeling

Felix Pei, Janne Lappalainen, Srinivas Turaga, Jakob Macke

Bernstein Conference 2024

ePoster

Deep neural network modeling of a visually-guided social behavior

Benjamin Cowley, Adam Calhoun, Nivedita Rangarajan, Jonathan Pillow, Mala Murthy

COSYNE 2022

ePoster

Modeling the formation of the visual hierarchy

Mikail Khona, Sarthak Chandra, Talia Konkle, Ila R Fiete

COSYNE 2022

ePoster

Modeling Hippocampal Spatial Learning Through a Valence-based Interplay of Dopamine and Serotonin

Carlos Wert Carvajal, Claudia Clopath, Melissa Reneaux, Tatjana Tchumatchenko

COSYNE 2022

ePoster

Modeling multi-region neural communication during decision making with recurrent switching dynamical systems

Orren Karniol-Tambour, David Zoltowski, Lucas Pinto, Efthymia Diamanti, David W. Tank, Carlos D. Brody, Jonathan Pillow

COSYNE 2022

ePoster

Modeling and optimization for neuromodulation in spinal cord stimulation

Hongda Li, Yanan Sui

COSYNE 2022

ePoster

Modeling tutor-directed dynamics in zebra finch song learning

Miles Martinez, Samuel Brudner, Richard Mooney, John Pearson

COSYNE 2022

ePoster

Multiscale Hierarchical Modeling Framework For Fully Mapping a Social Interaction

Shruthi Ravindranath, Talmo Pereira, Junyu Li, Jonathan Pillow, Mala Murthy

COSYNE 2022

ePoster

Online neural modeling and Bayesian optimization for closed-loop adaptive experiments

Anne Draelos, Pranjal Gupta, Na Young Jun, Chaichontat Sriworarat, Matthew Loring, Maxim Nikitchenko, Eva Naumann, John Pearson

COSYNE 2022

ePoster

Semi-supervised sequence modeling for improved behavior segmentation

Matt Whiteway, Anqi Wu, Mia Bramel, Kelly Buchanan, Catherine Chen, Neeli Mishra, Evan Schaffer, Andres Villegas, The International Brain Laboratory, Liam Paninski

COSYNE 2022

ePoster

“Attentional fingerprints” in conceptual space: Reliable, individuating patterns of visual attention revealed using natural language modeling

Caroline Robertson, Katherine Packard, Amanda Haskins

COSYNE 2023

ePoster

The cost of behavioral flexibility: a modeling study of reversal learning using a spiking neural network

Behnam Ghazinouri, Sen Cheng

Bernstein Conference 2024