TopicNeuro

Computation

50 Seminars · 40 ePosters · 4 Conferences · 1 Position

Latest

Position · Neuroscience

Dr. Michele Insanally

University of Pittsburgh
Pittsburgh, PA
Jan 4, 2026

The Insanally Lab is hiring postdocs to study the neural basis of auditory perception and learning. We use a wide range of techniques, including behavioral paradigms, in vivo multi-region neural recordings, optogenetics, chemogenetics, fiber photometry, and novel computational methods. Our lab is super supportive and collaborative, and we take mentoring seriously! Located at Pitt, our lab is part of a large systems neuroscience community that includes the CNBC and CMU. For inquiries, feel free to reach out at mni@pitt.edu. To find out more about our work, visit Insanallylab.com.

Seminar · Neuroscience

Computational Mechanisms of Predictive Processing in Brains and Machines

Dr. Antonino Greco
Hertie Institute for Clinical Brain Research, Germany
Dec 10, 2025

Predictive processing offers a unifying view of neural computation, proposing that brains continuously anticipate sensory input and update internal models based on prediction errors. In this talk, I will present converging evidence for the computational mechanisms underlying this framework across human neuroscience and deep neural networks. I will begin with recent work showing that large-scale distributed prediction-error encoding in the human brain directly predicts how sensory representations reorganize through predictive learning. I will then turn to PredNet, a popular predictive-coding-inspired deep network that has been widely used to model real-world biological vision systems. Using dynamic stimuli generated with our Spatiotemporal Style Transfer algorithm, we demonstrate that PredNet relies primarily on low-level spatiotemporal structure and remains insensitive to high-level content, revealing limits in its generalization capacity. Finally, I will discuss new recurrent vision models that integrate top-down feedback connections with intrinsic neural variability, uncovering a dual mechanism for robust sensory coding in which neural variability decorrelates unit responses while top-down feedback stabilizes network dynamics. Together, these results outline how prediction-error signaling and top-down feedback pathways shape adaptive sensory processing in biological and artificial systems.
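As a minimal sketch of the core loop behind this framework (my illustration, not the speaker's models; sizes and learning rates are made up): a latent representation predicts the input, and the prediction error drives both fast inference and slow learning.

```python
import numpy as np

# Minimal predictive-coding sketch: a representation r predicts the input x
# through weights W; the prediction error e = x - W r drives updates to both
# r (fast inference) and W (slow learning). All parameters are illustrative.
rng = np.random.default_rng(0)
n_in, n_latent = 16, 4
W = rng.normal(scale=0.1, size=(n_in, n_latent))

def predict_and_update(x, W, n_steps=50, lr_r=0.1, lr_w=0.01):
    r = np.zeros(n_latent)
    for _ in range(n_steps):
        e = x - W @ r              # prediction error
        r += lr_r * (W.T @ e)      # inference: reduce error w.r.t. r
    W += lr_w * np.outer(e, r)     # learning: reduce error w.r.t. W
    return r, W, e

x = rng.normal(size=n_in)          # a repeated stimulus
errors = []
for _ in range(100):
    r, W, e = predict_and_update(x, W)
    errors.append(np.linalg.norm(e))
# the prediction error on the repeated stimulus shrinks as the model learns
```

This is the sense in which "representations reorganize through predictive learning": the weights change only in proportion to what remains unpredicted.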

Seminar · Neuroscience

Convergent large-scale network and local vulnerabilities underlie brain atrophy across Parkinson’s disease stages

Andrew Vo
Montreal Neurological Institute, McGill University
Nov 6, 2025

Seminar · Neuroscience

AutoMIND: Deep inverse models for revealing neural circuit invariances

Richard Gao
Goethe University
Oct 2, 2025

Seminar · Neuroscience

OpenNeuro FitLins GLM: An Accessible, Semi-Automated Pipeline for OpenNeuro Task fMRI Analysis

Michael Demidenko
Stanford University
Aug 1, 2025

In this talk, I will discuss the OpenNeuro FitLins GLM package and provide an illustration of the analytic workflow. OpenNeuro FitLins GLM is a semi-automated pipeline that reduces barriers to analyzing task-based fMRI data from OpenNeuro's 600+ task datasets. Created for psychology, psychiatry, and cognitive neuroscience researchers without extensive computational expertise, the tool automates what is largely a manual process built on in-house scripts for data retrieval, validation, quality control, statistical modeling, and reporting, a process that, in some cases, may require weeks of effort. The workflow abides by open-science practices, enhancing reproducibility, and incorporates community feedback for model improvement. The pipeline integrates BIDS-compliant datasets and fMRIPrep-preprocessed derivatives, and dynamically creates BIDS Statistical Model specifications (with FitLins) to perform common mass-univariate GLM analyses. To enhance and standardize reporting, it generates comprehensive reports that include design matrices, statistical maps, and COBIDAS-aligned documentation that is fully reproducible from the model specifications and derivatives. OpenNeuro FitLins GLM has been tested on over 30 datasets spanning 50+ unique fMRI tasks (e.g., working memory, social processing, emotion regulation, decision-making, motor paradigms), reducing analysis times from weeks to hours when using high-performance computers, thereby enabling researchers to conduct robust single-study, meta-, and mega-analyses of task fMRI data with significantly improved accessibility, standardized reporting, and reproducibility.
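To make the core computation concrete, here is a hedged sketch of the mass-univariate GLM such a pipeline automates (synthetic design and data; a real pipeline builds the design matrix from BIDS events files with HRF-convolved regressors):

```python
import numpy as np

# Mass-univariate GLM sketch: one design matrix X, fit independently at every
# voxel in a single least-squares solve, then a contrast t-statistic per voxel.
rng = np.random.default_rng(1)
n_trs, n_voxels = 200, 500
task = (np.arange(n_trs) % 20 < 10).astype(float)   # toy block design
X = np.column_stack([task, np.ones(n_trs)])          # task regressor + intercept
betas_true = rng.normal(size=(2, n_voxels))
Y = X @ betas_true + rng.normal(scale=0.5, size=(n_trs, n_voxels))

# OLS solved once for all voxels: beta = (X'X)^-1 X'Y
beta_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

# t-statistic for the contrast "task > baseline" at each voxel
resid = Y - X @ beta_hat
dof = n_trs - X.shape[1]
sigma2 = (resid ** 2).sum(axis=0) / dof
c = np.array([1.0, 0.0])
var_c = c @ np.linalg.inv(X.T @ X) @ c
t_map = (c @ beta_hat) / np.sqrt(sigma2 * var_c)
```

Everything around this core (events parsing, HRF convolution, confound handling, report generation) is what the pipeline standardizes.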

Seminar · Neuroscience

Understanding reward-guided learning using large-scale datasets

Kim Stachenfeld
DeepMind / Columbia University
Jul 9, 2025

Understanding the neural mechanisms of reward-guided learning is a long-standing goal of computational neuroscience. Recent methodological innovations enable us to collect ever larger neural and behavioral datasets. This presents opportunities to achieve greater understanding of learning in the brain at scale, as well as methodological challenges. In the first part of the talk, I will discuss our recent insights into the mechanisms by which zebra finch songbirds learn to sing. Dopamine has long been thought to guide reward-based trial-and-error learning by encoding reward prediction errors. However, it is unknown whether the learning of natural behaviours, such as developmental vocal learning, occurs through dopamine-based reinforcement. Longitudinal recordings of dopamine and bird songs reveal that dopamine activity is indeed consistent with encoding a reward prediction error during naturalistic learning. In the second part of the talk, I will discuss recent work we are doing at DeepMind to develop tools for automatically discovering interpretable models of behavior directly from animal choice data. Our method, dubbed CogFunSearch, uses LLMs within an evolutionary search process to "discover" novel models, in the form of Python programs, that excel at accurately predicting animal behavior during reward-guided learning. The discovered programs reveal novel patterns of learning and choice behavior that update our understanding of how the brain solves reinforcement learning problems.
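The reward-prediction-error idea the first part builds on can be sketched with a textbook Rescorla-Wagner update (my illustration, not the songbird model; reward probability and learning rate are made up):

```python
import numpy as np

# Rescorla-Wagner sketch: the dopamine-like error delta = r - V shrinks as the
# value estimate V converges on the mean reward of the cue.
rng = np.random.default_rng(2)
alpha, V = 0.1, 0.0                  # learning rate, initial value estimate
deltas = []
for trial in range(200):
    r = rng.binomial(1, 0.8)         # cue is rewarded on 80% of trials
    delta = r - V                    # reward prediction error
    V += alpha * delta               # value update
    deltas.append(delta)

early = float(np.mean(deltas[:10]))          # large, positive early errors
late = float(np.mean(np.abs(deltas[-50:])))  # small fluctuations once learned
```

The talk's claim is that dopamine activity during naturalistic vocal learning is consistent with a signal of this kind.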

Seminar · Neuroscience

Neurobiological constraints on learning: bug or feature?

Cian O’Donnell
Ulster University
Jun 11, 2025

Understanding how brains learn requires bridging evidence across scales—from behaviour and neural circuits to cells, synapses, and molecules. In our work, we use computational modelling and data analysis to explore how the physical properties of neurons and neural circuits constrain learning. These include limits imposed by brain wiring, energy availability, molecular noise, and the 3D structure of dendritic spines. In this talk I will describe one such project testing if wiring motifs from fly brain connectomes can improve performance of reservoir computers, a type of recurrent neural network. The hope is that these insights into brain learning will lead to improved learning algorithms for artificial systems.
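A minimal echo state network sketch of the reservoir-computing setup mentioned above (generic and synthetic, not the fly-connectome model; connectome-derived wiring motifs would be substituted for the random recurrent matrix):

```python
import numpy as np

# Echo state network: a fixed random recurrent reservoir plus a trained linear
# readout. Here the task is one-step-ahead prediction of a sinusoid.
rng = np.random.default_rng(3)
n_res, t_len = 100, 500
W = rng.normal(size=(n_res, n_res)) / np.sqrt(n_res)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))      # spectral radius < 1
w_in = rng.normal(size=n_res)

u = np.sin(np.arange(t_len) * 0.1)                   # input signal
target = np.roll(u, -1)                              # predict the next input
states = np.zeros((t_len, n_res))
x = np.zeros(n_res)
for t in range(t_len):
    x = np.tanh(W @ x + w_in * u[t])                 # reservoir update
    states[t] = x

# train only the readout, by ridge regression on reservoir states
lam = 1e-4
w_out = np.linalg.solve(states.T @ states + lam * np.eye(n_res),
                        states.T @ target)
pred = states @ w_out
mse = np.mean((pred[50:-1] - target[50:-1]) ** 2)    # skip washout & wrapped end
```

Because only the readout is trained, swapping the random `W` for a connectome-derived wiring motif is a clean test of whether the motif itself helps.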

Seminar · Neuroscience

Neural mechanisms of optimal performance

Luca Mazzucato
University of Oregon
May 23, 2025

When we attend to a demanding task, our performance is poor at low arousal (when drowsy) or high arousal (when anxious), but we achieve optimal performance at intermediate arousal. This celebrated Yerkes-Dodson inverted-U law relating performance and arousal is colloquially referred to as being "in the zone." In this talk, I will elucidate the behavioral and neural mechanisms linking arousal and performance under the Yerkes-Dodson law in a mouse model. During decision-making tasks, mice express an array of discrete strategies, whereby the optimal strategy occurs at intermediate arousal, measured by pupil size, consistent with the inverted-U law. Population recordings from the auditory cortex (A1) further revealed that sound encoding is optimal at intermediate arousal. To explain the computational principle underlying this inverted-U law, we modeled the A1 circuit as a spiking network with excitatory/inhibitory clusters, based on the observed functional clusters in A1. Arousal induced a transition from a multi-attractor phase (low arousal) to a single-attractor phase (high arousal), with performance optimized at the transition point. The model also predicts stimulus- and arousal-induced modulations of neural variability, which we confirmed in the data. Our theory suggests that a single unifying dynamical principle, phase transitions in metastable dynamics, underlies both the inverted-U law of optimal performance and state-dependent modulations of neural variability.

Seminar · Neuroscience

Understanding reward-guided learning using large-scale datasets

Kim Stachenfeld
DeepMind / Columbia University
May 14, 2025

Understanding the neural mechanisms of reward-guided learning is a long-standing goal of computational neuroscience. Recent methodological innovations enable us to collect ever larger neural and behavioral datasets. This presents opportunities to achieve greater understanding of learning in the brain at scale, as well as methodological challenges. In the first part of the talk, I will discuss our recent insights into the mechanisms by which zebra finch songbirds learn to sing. Dopamine has long been thought to guide reward-based trial-and-error learning by encoding reward prediction errors. However, it is unknown whether the learning of natural behaviours, such as developmental vocal learning, occurs through dopamine-based reinforcement. Longitudinal recordings of dopamine and bird songs reveal that dopamine activity is indeed consistent with encoding a reward prediction error during naturalistic learning. In the second part of the talk, I will discuss recent work we are doing at DeepMind to develop tools for automatically discovering interpretable models of behavior directly from animal choice data. Our method, dubbed CogFunSearch, uses LLMs within an evolutionary search process to "discover" novel models, in the form of Python programs, that excel at accurately predicting animal behavior during reward-guided learning. The discovered programs reveal novel patterns of learning and choice behavior that update our understanding of how the brain solves reinforcement learning problems.

Seminar · Neuroscience

Harnessing Big Data in Neuroscience: From Mapping Brain Connectivity to Predicting Traumatic Brain Injury

Franco Pestilli
University of Texas, Austin, USA
May 13, 2025

Neuroscience is experiencing unprecedented growth in dataset size, both within individual brains and across populations. Large-scale, multimodal datasets are transforming our understanding of brain structure and function, creating opportunities to address previously unexplored questions. However, managing this increasing data volume requires new training and technology approaches. Modern data technologies are reshaping neuroscience by enabling researchers to tackle complex questions within a Ph.D. or postdoctoral timeframe. I will discuss cloud-based platforms such as brainlife.io, which provide scalable, reproducible, and accessible computational infrastructure. Modern data technology can democratize neuroscience, accelerate discovery, and foster scientific transparency and collaboration. Concrete examples will illustrate how these technologies can be applied to mapping brain connectivity, studying human learning and development, and developing predictive models for traumatic brain injury (TBI). By integrating cloud computing and scalable data-sharing frameworks, neuroscience can become more impactful, inclusive, and data-driven.

Seminar · Neuroscience

Simulating Thought Disorder: Fine-Tuning Llama-2 for Synthetic Speech in Schizophrenia

Alban Elias Voppel
McGill University
May 1, 2025

Seminar · Neuroscience

Relating circuit dynamics to computation: robustness and dimension-specific computation in cortical dynamics

Shaul Druckmann
Stanford University, Department of Neurobiology and Department of Psychiatry and Behavioral Sciences
Apr 23, 2025

Neural dynamics represent the hard-to-interpret substrate of circuit computations. Advances in large-scale recordings have highlighted the sheer spatiotemporal complexity of circuit dynamics within and across circuits, portraying in detail the difficulty of interpreting such dynamics and relating them to computation. Indeed, even in extremely simplified experimental conditions, one observes high-dimensional temporal dynamics in the relevant circuits. This complexity can potentially be addressed by the notion that not all changes in population activity have equal meaning, i.e., a small change in the evolution of activity along a particular dimension may have a bigger effect on a given computation than a large change in another. We term such conditions dimension-specific computation. Considering motor preparatory activity in a delayed-response task, we utilized neural recordings performed simultaneously with optogenetic perturbations to probe circuit dynamics. First, we revealed a remarkable robustness in the detailed evolution of certain dimensions of the population activity, beyond what was thought to be the case experimentally and theoretically. Second, the robust dimension in activity space carries nearly all of the decodable behavioral information, whereas other, non-robust dimensions contain nearly no decodable information, as if the circuit were set up to make informative dimensions stiff, i.e., resistant to perturbations, leaving uninformative dimensions sloppy, i.e., sensitive to perturbations. Third, we show that this robustness can be achieved by a modular organization of circuitry, whereby modules whose dynamics normally evolve independently can correct each other’s dynamics when an individual module is perturbed, a common design feature in robust systems engineering. Finally, I will present recent work extending this framework to understanding the neural dynamics underlying the preparation of speech.

Seminar · Neuroscience · Recording

Multisensory computations underlying flavor perception and food choice

Joost Maier
Wake Forest School of Medicine
Apr 17, 2025

Conference · Neuroscience

COSYNE 2025

Montreal, Canada
Mar 27, 2025

The COSYNE 2025 conference was held in Montreal, with post-conference workshops in Mont-Tremblant, continuing to provide a premier forum for computational and systems neuroscience. Attendees exchanged cutting-edge research in a single-track main meeting and in-depth specialized workshops, reflecting COSYNE’s mission to understand how neural systems function.

Seminar · Neuroscience

Cognitive maps as expectations learned across episodes – a model of the two dentate gyrus blades

Andrej Bicanski
Max Planck Institute for Human Cognitive and Brain Sciences
Mar 12, 2025

How can the hippocampal system transition from episodic one-shot learning to a multi-shot learning regime and what is the utility of the resultant neural representations? This talk will explore the role of the dentate gyrus (DG) anatomy in this context. The canonical DG model suggests it performs pattern separation. More recent experimental results challenge this standard model, suggesting DG function is more complex and also supports the precise binding of objects and events to space and the integration of information across episodes. Very recent studies attribute pattern separation and pattern integration to anatomically distinct parts of the DG (the suprapyramidal blade vs the infrapyramidal blade). We propose a computational model that investigates this distinction. In the model the two processing streams (potentially localized in separate blades) contribute to the storage of distinct episodic memories, and the integration of information across episodes, respectively. The latter forms generalized expectations across episodes, eventually forming a cognitive map. We train the model with two data sets, MNIST and plausible entorhinal cortex inputs. The comparison between the two streams allows for the calculation of a prediction error, which can drive the storage of poorly predicted memories and the forgetting of well-predicted memories. We suggest that differential processing across the DG aids in the iterative construction of spatial cognitive maps to serve the generation of location-dependent expectations, while at the same time preserving episodic memory traces of idiosyncratic events.
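A toy sketch of the gating principle described above (my illustration, not the authors' model; dimensions, rates, and the threshold are made up): a slow "expectation" stream integrates across episodes, and an episodic store keeps only what that stream predicts poorly.

```python
import numpy as np

# Prediction-error-gated storage: typical events are gradually absorbed into a
# generalized expectation and stop being stored; idiosyncratic events keep
# producing large errors and are stored one-shot.
rng = np.random.default_rng(4)
dim, n_events = 32, 200
prototype = rng.normal(size=dim)             # the regular, repeated event
expectation = np.zeros(dim)                  # integrative stream (slow learning)
store, log = [], []                          # episodic store; (is_typical, stored)
threshold, lr = 3.0, 0.05

for i in range(n_events):
    is_typical = rng.random() < 0.9
    if is_typical:
        event = prototype + 0.1 * rng.normal(size=dim)   # typical episode
    else:
        event = rng.normal(size=dim)                     # idiosyncratic episode
    error = np.linalg.norm(event - expectation)          # prediction error
    stored = error > threshold
    if stored:
        store.append(event)                  # poorly predicted: store one-shot
    expectation += lr * (event - expectation)            # integrate across episodes
    log.append((is_typical, stored))
```

Early on everything is stored; once the expectation (the nascent cognitive map) captures the regularities, only idiosyncratic episodes still enter the store, and well-predicted ones can be forgotten.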

Seminar · Neuroscience · Recording

Brain Emulation Challenge Workshop

Randal A. Koene
Co-Founder and Chief Science Officer, Carboncopies
Feb 21, 2025

The Brain Emulation Challenge workshop will tackle cutting-edge topics such as ground-truthing for validation, leveraging artificial datasets generated from virtual brain tissue, and the transformative potential of virtual brain platforms, for example as applied to the forthcoming Brain Emulation Challenge.

Seminar · Neuroscience

Memory formation in hippocampal microcircuit

Nikolaos Andreakos
Visiting Scientist, School of Computer Science, University of Lincoln; Scientific Associate, National and Kapodistrian University of Athens
Feb 7, 2025

The centre of memory in the brain is the medial temporal lobe (MTL), and especially the hippocampus. In our research, a flexible brain-inspired computational microcircuit of the CA1 region of the mammalian hippocampus was upgraded and used to examine how information retrieval is affected under different conditions. Six models were created by modulating different excitatory and inhibitory pathways. The results showed that increasing the strength of the feedforward excitation was the most effective way to recall memories; in other words, stronger feedforward excitation allows the system to access stored memories more accurately.
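The recall effect can be illustrated with a toy autoassociative network (a Hopfield-style sketch of my own, far simpler than the biophysical CA1 microcircuit in the talk): a gain on the feedforward cue determines whether a stored memory is retrieved against interference from another active memory.

```python
import numpy as np

# Hopfield-style recall: memories stored in Hebbian recurrent weights; a
# feedforward cue with gain g competes with an already-active (wrong) memory.
rng = np.random.default_rng(5)
n, n_pats = 200, 10
patterns = rng.choice([-1, 1], size=(n_pats, n)).astype(float)
W = patterns.T @ patterns / n                        # Hebbian storage
np.fill_diagonal(W, 0)

target, distractor = patterns[0], patterns[1]
mask = (rng.random(n) < 0.6).astype(float)           # cue carries 60% of the bits
cue = target * mask

def recall(g, steps=30):
    x = distractor.copy()                # network starts in the wrong memory
    for _ in range(steps):
        x = np.sign(W @ x + g * cue)     # recurrent + feedforward drive
    return float(np.mean(x == target))   # fraction of bits correctly recalled

weak, strong = recall(g=0.1), recall(g=2.0)
# strong feedforward excitation pulls the network into the correct attractor
```

With a weak cue the recurrent attractor dominates and recall stays near chance; a strong feedforward drive flips the network into the target memory, mirroring the qualitative finding above.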

Seminar · Neuroscience

Predicting traveling waves: a new mathematical technique to link the structure of a network to the specific patterns of neural activity

Roberto Budzinski
Western University
Feb 6, 2025

Seminar · Neuroscience

Contentopic mapping and object dimensionality - a novel understanding on the organization of object knowledge

Jorge Almeida
University of Coimbra
Jan 28, 2025

Our ability to recognize an object amongst many others is one of the most important features of the human mind. However, object recognition requires tremendous computational effort, as we need to solve a complex and recursive environment with ease and proficiency. This challenging feat is dependent on the implementation of an effective organization of knowledge in the brain. Here I put forth a novel understanding of how object knowledge is organized in the brain, by proposing that the organization of object knowledge follows key object-related dimensions, analogously to how sensory information is organized in the brain. Moreover, I will also put forth that this knowledge is topographically laid out in the cortical surface according to these object-related dimensions that code for different types of representational content – I call this contentopic mapping. I will show a combination of fMRI and behavioral data to support these hypotheses and present a principled way to explore the multidimensionality of object processing.

Seminar · Neuroscience

Screen Savers: Protecting adolescent mental health in a digital world

Amy Orben
University of Cambridge UK
Dec 3, 2024

In our rapidly evolving digital world, there is increasing concern about the impact of digital technologies and social media on the mental health of young people. Policymakers and the public are nervous. Psychologists are facing mounting pressures to deliver evidence that can inform policies and practices to safeguard both young people and society at large. However, research progress is slow while technological change is accelerating. My talk will reflect on this, both as a question of psychological science and metascience. Digital companies have designed highly popular environments that differ in important ways from traditional offline spaces. By revisiting the foundations of psychology (e.g. development and cognition) and considering digital changes' impact on theories and findings, we gain deeper insights into questions such as the following. (1) How do digital environments exacerbate developmental vulnerabilities that predispose young people to mental health conditions? (2) How do digital designs interact with cognitive and learning processes, formalised through computational approaches such as reinforcement learning or Bayesian modelling? However, we also need to face deeper questions about what it means to do science about new technologies and the challenge of keeping pace with technological advancements. Therefore, I discuss the concept of ‘fast science’, where, during crises, scientists might lower their standards of evidence to come to conclusions quicker. Might psychologists want to take this approach in the face of technological change and looming concerns? The talk concludes with a discussion of such strategies for 21st-century psychology research in the era of digitalization.

Seminar · Neuroscience

The Brain Prize winners' webinar

Larry Abbott, Haim Sompolinsky, Terry Sejnowski
Columbia University; Harvard University / Hebrew University; Salk Institute
Nov 30, 2024

This webinar brings together three leaders in theoretical and computational neuroscience—Larry Abbott, Haim Sompolinsky, and Terry Sejnowski—to discuss how neural circuits generate fundamental aspects of the mind. Abbott illustrates mechanisms in electric fish that differentiate self-generated electric signals from external sensory cues, showing how predictive plasticity and two-stage signal cancellation mediate a sense of self. Sompolinsky explores attractor networks, revealing how discrete and continuous attractors can stabilize activity patterns, enable working memory, and incorporate chaotic dynamics underlying spontaneous behaviors. He further highlights the concept of object manifolds in high-level sensory representations and raises open questions on integrating connectomics with theoretical frameworks. Sejnowski bridges these motifs with modern artificial intelligence, demonstrating how large-scale neural networks capture language structures through distributed representations that parallel biological coding. Together, their presentations emphasize the synergy between empirical data, computational modeling, and connectomics in explaining the neural basis of cognition—offering insights into perception, memory, language, and the emergence of mind-like processes.

Seminar · Neuroscience

Sensory cognition

SueYeon Chung, Srini Turaga
New York University; Janelia Research Campus
Nov 29, 2024

This webinar features presentations from SueYeon Chung (New York University) and Srinivas Turaga (HHMI Janelia Research Campus) on theoretical and computational approaches to sensory cognition. Chung introduced a “neural manifold” framework to capture how high-dimensional neural activity is structured into meaningful manifolds reflecting object representations. She demonstrated that manifold geometry—shaped by radius, dimensionality, and correlations—directly governs a population’s capacity for classifying or separating stimuli under nuisance variations. Applying these ideas as a data analysis tool, she showed how measuring object-manifold geometry can explain transformations along the ventral visual stream and suggested that manifold principles also yield better self-supervised neural network models resembling mammalian visual cortex. Turaga described simulating the entire fruit fly visual pathway using its connectome, modeling 64 key cell types in the optic lobe. His team’s systematic approach—combining sparse connectivity from electron microscopy with simple dynamical parameters—recapitulated known motion-selective responses and produced novel testable predictions. Together, these studies underscore the power of combining connectomic detail, task objectives, and geometric theories to unravel neural computations bridging from stimuli to cognitive functions.
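The geometric point can be illustrated with a toy example (my sketch, not Chung's capacity theory; sizes and radii are made up): object "manifolds" as point clouds around random centers, where a linear readout assigning random binary labels to whole objects succeeds less often as manifold radius grows.

```python
import numpy as np

# Each object is a cloud of points (its "manifold"); a least-squares linear
# readout must give every point of an object the same random binary label.
# Compact manifolds are easy to separate; large-radius ones are not.
rng = np.random.default_rng(7)
n_dim, n_obj, n_pts = 50, 40, 20
centers = rng.normal(size=(n_obj, n_dim))
labels = rng.choice([-1.0, 1.0], size=n_obj)
y = np.repeat(labels, n_pts)

def readout_accuracy(radius):
    X = (centers[:, None, :] +
         radius * rng.normal(size=(n_obj, n_pts, n_dim))).reshape(-1, n_dim)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)        # linear readout
    return float(np.mean(np.sign(X @ w) == y))

acc_tight = readout_accuracy(radius=0.05)            # compact manifolds
acc_loose = readout_accuracy(radius=5.0)             # large-radius manifolds
```

This is the intuition behind radius and dimensionality governing classification capacity: the labels and centers are identical in both conditions, only the manifold geometry changes.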

Seminar · Neuroscience

Learning and Memory

Nicolas Brunel, Ashok Litwin-Kumar, Julijana Gjorgieva
Duke University; Columbia University; Technical University Munich
Nov 29, 2024

This webinar on learning and memory features three experts—Nicolas Brunel, Ashok Litwin-Kumar, and Julijana Gjorgieva—who present theoretical and computational approaches to understanding how neural circuits acquire and store information across different scales. Brunel discusses calcium-based plasticity and how standard “Hebbian-like” plasticity rules inferred from in vitro or in vivo datasets constrain synaptic dynamics, aligning with classical observations (e.g., STDP) and explaining how synaptic connectivity shapes memory. Litwin-Kumar explores insights from the fruit fly connectome, emphasizing how the mushroom body—a key site for associative learning—implements a high-dimensional, random representation of sensory features. Convergent dopaminergic inputs gate plasticity, reflecting a high-dimensional “critic” that refines behavior. Feedback loops within the mushroom body further reveal sophisticated interactions between learning signals and action selection. Gjorgieva examines how activity-dependent plasticity rules shape circuitry from the subcellular (e.g., synaptic clustering on dendrites) to the cortical network level. She demonstrates how spontaneous activity during development, Hebbian competition, and inhibitory-excitatory balance collectively establish connectivity motifs responsible for key computations such as response normalization.

Seminar · Neuroscience

Decision and Behavior

Sam Gershman, Jonathan Pillow, Kenji Doya
Harvard University; Princeton University; Okinawa Institute of Science and Technology
Nov 29, 2024

This webinar addressed computational perspectives on how animals and humans make decisions, spanning normative, descriptive, and mechanistic models. Sam Gershman (Harvard) presented a capacity-limited reinforcement learning framework in which policies are compressed under an information bottleneck constraint. This approach predicts pervasive perseveration, stimulus‐independent “default” actions, and trade-offs between complexity and reward. Such policy compression reconciles observed action stochasticity and response time patterns with an optimal balance between learning capacity and performance. Jonathan Pillow (Princeton) discussed flexible descriptive models for tracking time-varying policies in animals. He introduced dynamic Generalized Linear Models (Sidetrack) and hidden Markov models (GLM-HMMs) that capture day-to-day and trial-to-trial fluctuations in choice behavior, including abrupt switches between “engaged” and “disengaged” states. These models provide new insights into how animals’ strategies evolve under learning. Finally, Kenji Doya (OIST) highlighted the importance of unifying reinforcement learning with Bayesian inference, exploring how cortical-basal ganglia networks might implement model-based and model-free strategies. He also described Japan’s Brain/MINDS 2.0 and Digital Brain initiatives, aiming to integrate multimodal data and computational principles into cohesive “digital brains.”
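The policy-compression idea from Gershman's part has a standard closed form that can be sketched directly (a Blahut-Arimoto-style iteration of my own; the toy task and parameters are made up): the optimal capacity-limited policy is pi(a|s) proportional to p(a) * exp(beta * Q(s, a)), where p(a) is the marginal action distribution.

```python
import numpy as np

# Low beta (tight capacity) collapses the policy toward the state-independent
# marginal p(a) -- a perseverative "default" -- while high beta recovers the
# reward-maximizing policy at the cost of higher policy complexity I(S;A).
Q = np.array([[1.0, 0.0, 0.0],       # toy reward table Q[s, a]
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
rho = np.ones(3) / 3                 # uniform state distribution

def compressed_policy(beta, n_iter=200):
    p_a = np.ones(3) / 3
    for _ in range(n_iter):
        pi = p_a * np.exp(beta * Q)              # unnormalized pi(a|s)
        pi /= pi.sum(axis=1, keepdims=True)
        p_a = rho @ pi                           # update marginal p(a)
    return pi

def mutual_info(pi):                 # policy complexity I(S;A) in nats
    p_a = rho @ pi
    return float(np.sum(rho[:, None] * pi * np.log(pi / p_a)))

pi_low, pi_high = compressed_policy(beta=0.1), compressed_policy(beta=10.0)
# low beta: near-uniform, cheap policy; high beta: near-deterministic policy
```

The complexity-reward trade-off traced out by beta is what predicts the perseveration and stochasticity patterns described above.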

Seminar · Neuroscience

LLMs and Human Language Processing

Mariya Toneva, Ariel Goldstein, Jean-Remi King
Max Planck Institute for Software Systems; Hebrew University; École Normale Supérieure
Nov 29, 2024

This webinar convened researchers at the intersection of Artificial Intelligence and Neuroscience to investigate how large language models (LLMs) can serve as valuable “model organisms” for understanding human language processing. Presenters showcased evidence that brain recordings (fMRI, MEG, ECoG) acquired while participants read or listened to unconstrained speech can be predicted by representations extracted from state-of-the-art text- and speech-based LLMs. In particular, text-based LLMs tend to align better with higher-level language regions, capturing more semantic aspects, while speech-based LLMs excel at explaining early auditory cortical responses. However, purely low-level features can drive part of these alignments, complicating interpretations. New methods, including perturbation analyses, highlight which linguistic variables matter for each cortical area and time scale. Further, “brain tuning” of LLMs—fine-tuning on measured neural signals—can improve semantic representations and downstream language tasks. Despite open questions about interpretability and exact neural mechanisms, these results demonstrate that LLMs provide a promising framework for probing the computations underlying human language comprehension and production at multiple spatiotemporal scales.
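The prediction setup behind these alignment results is a linear encoding model, sketched here on synthetic data (random vectors stand in for LLM embeddings and brain recordings; dimensions and the ridge penalty are made up):

```python
import numpy as np

# Encoding-model sketch: ridge-regress brain responses on model features, then
# score voxelwise prediction on held-out stimuli.
rng = np.random.default_rng(8)
n_stim, n_feat, n_vox = 300, 64, 20
embeddings = rng.normal(size=(n_stim, n_feat))       # stand-in for LLM features
true_map = rng.normal(size=(n_feat, n_vox)) / np.sqrt(n_feat)
brain = embeddings @ true_map + 0.5 * rng.normal(size=(n_stim, n_vox))

train, test = slice(0, 200), slice(200, 300)
lam = 1.0
Xtr = embeddings[train]
W = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(n_feat),
                    Xtr.T @ brain[train])            # ridge regression weights
pred = embeddings[test] @ W
r = [np.corrcoef(pred[:, v], brain[test][:, v])[0, 1] for v in range(n_vox)]
mean_r = float(np.mean(r))
```

In the studies above, the per-region pattern of such held-out correlations, for text-based versus speech-based model features, is what distinguishes higher-level language regions from early auditory cortex.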

Seminar · Neuroscience

Understanding the complex behaviors of the ‘simple’ cerebellar circuit

Megan Carey
The Champalimaud Center for the Unknown, Lisbon, Portugal
Nov 14, 2024

Every movement we make requires us to precisely coordinate muscle activity across our body in space and time. In this talk I will describe our efforts to understand how the brain generates flexible, coordinated movement. We have taken a behavior-centric approach to this problem, starting with the development of quantitative frameworks for mouse locomotion (LocoMouse; Machado et al., eLife 2015, 2020) and locomotor learning, in which mice adapt their locomotor symmetry in response to environmental perturbations (Darmohray et al., Neuron 2019). Combined with genetic circuit dissection, these studies reveal specific, cerebellum-dependent features of these complex, whole-body behaviors. This provides a key entry point for understanding how neural computations within the highly stereotyped cerebellar circuit support the precise coordination of muscle activity in space and time. Finally, I will present recent unpublished data that provide surprising insights into how cerebellar circuits flexibly coordinate whole-body movements in dynamic environments.

Seminar · Neuroscience

Contribution of computational models of reinforcement learning to neuroscience (keywords: computational modeling, reward, learning, decision-making, conditioning, navigation, dopamine, basal ganglia, prefrontal cortex, hippocampus)

Mehdi Khamassi
Centre National de la Recherche Scientifique / Sorbonne University
Nov 8, 2024

Seminar · Neuroscience

Use case determines the validity of neural systems comparisons

Erin Grant
Gatsby Computational Neuroscience Unit & Sainsbury Wellcome Centre at University College London
Oct 16, 2024

Deep learning provides new data-driven tools to relate neural activity to perception and cognition, aiding scientists in developing theories of neural computation that increasingly resemble biological systems both at the level of behavior and of neural activity. But what in a deep neural network should correspond to what in a biological system? This question is addressed implicitly in the use of comparison measures that relate specific neural or behavioral dimensions via a particular functional form. However, distinct comparison methodologies can give conflicting results in recovering even a known ground-truth model in an idealized setting, leaving open the question of what to conclude from the outcome of a systems comparison using any given methodology. Here, we develop a framework to make explicit and quantitative the effect of both hypothesis-driven aspects—such as details of the architecture of a deep neural network—as well as methodological choices in a systems comparison setting. We demonstrate via the learning dynamics of deep neural networks that, while the role of the comparison methodology is often de-emphasized relative to hypothesis-driven aspects, this choice can impact and even invert the conclusions to be drawn from a comparison between neural systems. We provide evidence that the right way to adjudicate a comparison depends on the use case—the scientific hypothesis under investigation—which could range from identifying single-neuron or circuit-level correspondences to capturing generalizability to new stimulus properties.

SeminarNeuroscience

Localisation of Seizure Onset Zone in Epilepsy Using Time Series Analysis of Intracranial Data

Hamid Karimi-Rouzbahani
The University of Queensland
Oct 11, 2024

There are over 30 million people with drug-resistant epilepsy worldwide. When neuroimaging and non-invasive neural recordings fail to localise seizure onset zones (SOZ), intracranial recordings become the best chance for localisation and seizure-freedom in those patients. However, intracranial neural activities remain hard to visually discriminate across recording channels, which limits the success of intracranial visual investigations. In this presentation, I introduce methods that quantify intracranial neural time series and combine them with explainable machine learning algorithms to localise the SOZ in the epileptic brain. I discuss the potential and limitations of our methods in the localisation of SOZ in epilepsy, providing insights for future research in this area.

ConferenceNeuroscience

Bernstein Conference 2024

Goethe University, Frankfurt, Germany
Sep 29, 2024

Each year the Bernstein Network invites the international computational neuroscience community to the annual Bernstein Conference for intensive scientific exchange. Bernstein Conference 2024, held in Frankfurt am Main, featured discussions, keynote lectures, and poster sessions, and has established itself as one of the most renowned conferences worldwide in this field.

SeminarNeuroscienceRecording

Prosocial Learning and Motivation across the Lifespan

Patricia Lockwood
University of Birmingham, UK
Sep 10, 2024

2024 BACN Early-Career Prize Lecture. Many of our decisions affect other people. Our choices can decelerate climate change, stop the spread of infectious diseases, and directly help or harm others. Prosocial behaviours – decisions that help others – could contribute to reducing the impact of these challenges, yet their computational and neural mechanisms remain poorly understood. I will present recent work that examines prosocial motivation (how willing we are to incur costs to help others), prosocial learning (how we learn from the outcomes of our choices when they affect other people), and prosocial preferences (our self-reports of helping others). Throughout the talk, I will outline the possible computational and neural bases of these behaviours, and how they may differ from young adulthood to old age.

SeminarNeuroscience

Updating our models of the basal ganglia using advances in neuroanatomy and computational modeling

Mac Shine
University of Sydney
May 29, 2024
SeminarNeuroscience

Modelling the fruit fly brain and body

Srinivas Turaga
HHMI | Janelia
May 15, 2024

Through recent advances in microscopy, we now have an unprecedented view of the brain and body of the fruit fly Drosophila melanogaster. We now know the connectivity at single neuron resolution across the whole brain. How do we translate these new measurements into a deeper understanding of how the brain processes sensory information and produces behavior? I will describe two computational efforts to model the brain and the body of the fruit fly. First, I will describe a new modeling method which makes highly accurate predictions of neural activity in the fly visual system as measured in the living brain, using only measurements of its connectivity from a dead brain [1], joint work with Jakob Macke. Second, I will describe a whole body physics simulation of the fruit fly which can accurately reproduce its locomotion behaviors, both flight and walking [2], joint work with Google DeepMind.
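As a loose illustration of the connectome-constrained modeling idea (my own sketch with invented parameters, not the Janelia/Macke model), one can fix a rate network's weights to a measured connectivity matrix and simulate its response to input:

```python
# Sketch of connectome-constrained modeling: the weight matrix W stands in for
# measured connectivity; only dynamics parameters (gain, tau) remain free.
import numpy as np

rng = np.random.default_rng(2)
n = 100
W = rng.normal(scale=1 / np.sqrt(n), size=(n, n))  # placeholder for a measured connectome
tau, dt = 20.0, 1.0

def simulate(stimulus, gain):
    """Leaky rate dynamics: tau * dr/dt = -r + tanh(gain * (W @ r + input))."""
    r = np.zeros(n)
    rates = []
    for u in stimulus:           # one input vector per time step
        r = r + dt / tau * (-r + np.tanh(gain * (W @ r + u)))
        rates.append(r.copy())
    return np.array(rates)

# Drive a subset of input units for part of the simulation, as a crude stand-in
# for visual stimulation of photoreceptor channels.
stim = np.zeros((200, n))
stim[50:150, :10] = 1.0
rates = simulate(stim, gain=1.5)
print(rates.shape)
```

The actual work fits such models to recorded activity; here the point is only that once connectivity is fixed from anatomy, very few parameters are left to explain physiology.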

SeminarNeuroscience

Stability of visual processing in passive and active vision

Tobias Rose
Institute of Experimental Epileptology and Cognition Research University of Bonn Medical Center
Mar 28, 2024

The visual system faces a dual challenge. On the one hand, features of the natural visual environment should be stably processed - irrespective of ongoing wiring changes, representational drift, and behavior. On the other hand, eye, head, and body motion require a robust integration of pose and gaze shifts in visual computations for a stable perception of the world. We address these dimensions of stable visual processing by studying the circuit mechanism of long-term representational stability, focusing on the role of plasticity, network structure, experience, and behavioral state while recording large-scale neuronal activity with miniature two-photon microscopy.

SeminarNeuroscienceRecording

Predictive processing: a circuit approach to psychosis

Georg Keller
Friedrich Miescher Institute for Biomedical Research, Basel
Mar 14, 2024

Predictive processing is a computational framework that aims to explain how the brain processes sensory information by making predictions about the environment and minimizing prediction errors. It can also be used to explain some of the key symptoms of psychotic disorders such as schizophrenia. In my talk, I will provide an overview of our progress in this endeavor.

SeminarNeuroscience

Unifying the mechanisms of hippocampal episodic memory and prefrontal working memory

James Whittington
Stanford University / University of Oxford
Feb 14, 2024

Remembering events in the past is crucial to intelligent behaviour. Flexible memory retrieval, beyond simple recall, requires a model of how events relate to one another. Two key brain systems are implicated in this process: the hippocampal episodic memory (EM) system and the prefrontal working memory (WM) system. While an understanding of the hippocampal system, from computation to algorithm and representation, is emerging, less is understood about how the prefrontal WM system can give rise to flexible computations beyond simple memory retrieval, and even less is understood about how the two systems relate to each other. Here we develop a mathematical theory relating the algorithms and representations of EM and WM by showing a duality between storing memories in synapses versus neural activity. In doing so, we develop a formal theory of the algorithm and representation of prefrontal WM as structured, and controllable, neural subspaces (termed activity slots). By building models using this formalism, we elucidate the differences, similarities, and trade-offs between the hippocampal and prefrontal algorithms. Lastly, we show that several prefrontal representations in tasks ranging from list learning to cue-dependent recall are unified as controllable activity slots. Our results unify frontal and temporal representations of memory, and offer a new basis for understanding the prefrontal representation of WM.

SeminarNeuroscienceRecording

Reimagining the neuron as a controller: A novel model for Neuroscience and AI

Dmitri 'Mitya' Chklovskii
Flatiron Institute, Center for Computational Neuroscience
Feb 5, 2024

We build upon and expand the efficient coding and predictive information models of neurons, presenting a novel perspective that neurons not only predict but also actively influence their future inputs through their outputs. We introduce the concept of neurons as feedback controllers of their environments, a role traditionally considered computationally demanding, particularly when the dynamical system characterizing the environment is unknown. By harnessing a novel data-driven control framework, we illustrate the feasibility of biological neurons functioning as effective feedback controllers. This innovative approach enables us to coherently explain various experimental findings that previously seemed unrelated. Our research has profound implications, potentially revolutionizing the modeling of neuronal circuits and paving the way for the creation of alternative, biologically inspired artificial neural networks.

SeminarNeuroscience

Neuromodulation of striatal D1 cells shapes BOLD fluctuations in anatomically connected thalamic and cortical regions

Marija Markicevic
Yale
Jan 19, 2024

Understanding how macroscale brain dynamics are shaped by microscale mechanisms is crucial in neuroscience. We investigate this relationship in animal models by directly manipulating cellular properties and measuring whole-brain responses using resting-state fMRI. Specifically, we explore the impact of chemogenetically neuromodulating D1 medium spiny neurons in the dorsomedial caudate putamen (CPdm) on BOLD dynamics within a striato-thalamo-cortical circuit in mice. Our findings indicate that CPdm neuromodulation alters BOLD dynamics in thalamic subregions projecting to the dorsomedial striatum, influencing both local and inter-regional connectivity in cortical areas. This study contributes to understanding structure–function relationships in shaping inter-regional communication between subcortical and cortical levels.

SeminarNeuroscienceRecording

Tracking subjects' strategies in behavioural choice experiments at trial resolution

Mark Humphries
University of Nottingham
Dec 7, 2023

Psychology and neuroscience are increasingly looking to fine-grained analyses of decision-making behaviour, seeking to characterise not just the variation between subjects but also a subject's variability across time. When analysing the behaviour of each subject in a choice task, we ideally want to know not only when the subject has learnt the correct choice rule but also what the subject tried while learning. I introduce a simple but effective Bayesian approach to inferring the probability of different choice strategies at trial resolution. This can be used both for inferring when subjects learn, by tracking the probability of the strategy matching the target rule, and for inferring subjects' use of exploratory strategies during learning. Applied to data from rodent and human decision tasks, we find learning occurs earlier and more often than estimated using classical approaches. Around both learning and changes in the rewarded rules, the exploratory strategies of win-stay and lose-shift, often considered complementary, are consistently used independently. Indeed, we find the use of lose-shift is strong evidence that animals have latently learnt the salient features of a new rewarded rule. Our approach can be extended to any discrete choice strategy, and its low computational cost is ideally suited for real-time analysis and closed-loop control.
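A minimal sketch of trial-resolution strategy inference in this spirit (my simplification, not the speaker's exact algorithm) tracks a decaying Beta posterior over whether each choice is consistent with a candidate strategy such as win-stay:

```python
# Trial-resolution strategy inference sketch: a Beta posterior with exponential
# forgetting, so the estimate can track changes in strategy use over trials.
import numpy as np

def strategy_probability(consistent, gamma=0.9, a0=1.0, b0=1.0):
    """consistent[t] = 1 if the trial-t choice matched the strategy's prediction.
    Returns the posterior mean P(strategy) after each trial; gamma < 1 decays
    old evidence so the estimate stays responsive at trial resolution."""
    a, b = a0, b0
    p = []
    for x in consistent:
        a = gamma * a + x          # evidence the strategy was used
        b = gamma * b + (1 - x)    # evidence it was not
        p.append(a / (a + b))
    return np.array(p)

# Toy session: choices are unrelated to win-stay for 20 trials, then the
# subject adopts it; the inferred probability climbs accordingly.
rng = np.random.default_rng(1)
consistent = np.concatenate([rng.integers(0, 2, 20), np.ones(20, dtype=int)])
p = strategy_probability(consistent)
print(p[-1])
```

The same update can be run in parallel for any discrete strategy (lose-shift, follow-the-light, ...), which is what makes the approach cheap enough for closed-loop use.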

SeminarNeuroscience

Connectome-based models of neurodegenerative disease

Jacob Vogel
Lund University
Dec 6, 2023

Neurodegenerative diseases involve accumulation of aberrant proteins in the brain, leading to brain damage and progressive cognitive and behavioral dysfunction. Many gaps exist in our understanding of how these diseases initiate and how they progress through the brain. However, evidence has accumulated supporting the hypothesis that aberrant proteins can be transported using the brain’s intrinsic network architecture — in other words, using the brain’s natural communication pathways. This theory forms the basis of connectome-based computational models, which combine real human data and theoretical disease mechanisms to simulate the progression of neurodegenerative diseases through the brain. In this talk, I will first review work leading to the development of connectome-based models, and work from my lab and others that have used these models to test hypothetical modes of disease progression. Second, I will discuss the future and potential of connectome-based models to achieve clinically useful individual-level predictions, as well as to generate novel biological insights into disease progression. Along the way, I will highlight recent work by my lab and others that is already moving the needle toward these lofty goals.
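The core idea of such models can be sketched as diffusion of pathology along a connectome graph (a toy illustration with invented parameters, not the lab's implementation):

```python
# Network diffusion sketch: pathology spreads along connections according to
# dx/dt = -beta * L @ x, where L is the graph Laplacian of a toy connectome.
import numpy as np

rng = np.random.default_rng(3)
n = 20
A = rng.random((n, n))
A = (A + A.T) / 2                 # symmetric toy connectome
np.fill_diagonal(A, 0)
L = np.diag(A.sum(axis=1)) - A    # graph Laplacian

def spread(x0, beta=0.05, steps=100):
    """Euler-integrate the diffusion equation; total pathology is conserved."""
    x = x0.copy()
    history = [x.copy()]
    for _ in range(steps):
        x = x - beta * (L @ x)
        history.append(x.copy())
    return np.array(history)

x0 = np.zeros(n)
x0[0] = 1.0                       # seed pathology in a single region
h = spread(x0)
print(h[0].sum(), h[-1].sum())    # conserved total, redistributed across regions
```

Real connectome-based models replace the toy matrix with tractography- or tracer-derived connectivity and fit the spread pattern against regional protein measurements.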

SeminarNeuroscience

Modeling the Navigational Circuitry of the Fly

Larry Abbott
Columbia University
Dec 1, 2023

Navigation requires orienting oneself relative to landmarks in the environment, evaluating relevant sensory data, remembering goals, and converting all this information into motor commands that direct locomotion. I will present models, highly constrained by connectomic, physiological, and behavioral data, for how these functions are accomplished in the fly brain.

SeminarNeuroscience

Bio-realistic multiscale modeling of cortical circuits

Anton Arkhipov
Allen Institute
Nov 24, 2023

A central question in neuroscience is how the structure of brain circuits determines their activity and function. To explore this systematically, we developed a 230,000-neuron model of mouse primary visual cortex (area V1). The model integrates a broad array of experimental data, including the distribution and morpho-electric properties of different neuron types in V1.

SeminarNeuroscience

Trends in NeuroAI - SwiFT: Swin 4D fMRI Transformer

Junbeom Kwon
Nov 21, 2023

Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri).

Title: SwiFT: Swin 4D fMRI Transformer

Abstract: Modeling spatiotemporal brain dynamics from high-dimensional data, such as functional Magnetic Resonance Imaging (fMRI), is a formidable task in neuroscience. Existing approaches for fMRI analysis utilize hand-crafted features, but the process of feature extraction risks losing essential information in fMRI scans. To address this challenge, we present SwiFT (Swin 4D fMRI Transformer), a Swin Transformer architecture that can learn brain dynamics directly from fMRI volumes in a memory- and computation-efficient manner. SwiFT achieves this by implementing a 4D window multi-head self-attention mechanism and absolute positional embeddings. We evaluate SwiFT using multiple large-scale resting-state fMRI datasets, including the Human Connectome Project (HCP), Adolescent Brain Cognitive Development (ABCD), and UK Biobank (UKB) datasets, to predict sex, age, and cognitive intelligence. Our experimental outcomes reveal that SwiFT consistently outperforms recent state-of-the-art models. Furthermore, by leveraging its end-to-end learning capability, we show that contrastive loss-based self-supervised pre-training of SwiFT can enhance performance on downstream tasks. Additionally, we employ an explainable AI method to identify the brain regions associated with sex classification. To our knowledge, SwiFT is the first Swin Transformer architecture to process 4D spatiotemporal brain functional data in an end-to-end fashion. Our work holds substantial potential in facilitating scalable learning of functional brain imaging in neuroscience research by reducing the hurdles associated with applying Transformer models to high-dimensional fMRI.

Speaker: Junbeom Kwon is a research associate working in Prof. Jiook Cha’s lab at Seoul National University.

Paper link: https://arxiv.org/abs/2307.05916

SeminarNeuroscience

Movements and engagement during decision-making

Anne Churchland
University of California Los Angeles, USA
Nov 8, 2023

When experts are immersed in a task, a natural assumption is that their brains prioritize task-related activity. Accordingly, most efforts to understand neural activity during well-learned tasks focus on cognitive computations and task-related movements. Surprisingly, we observed that during decision-making, the cortex-wide activity of multiple cell types is dominated by movements, especially “uninstructed movements” that are spontaneously expressed. These observations argue that animals execute expert decisions while performing richly varied, uninstructed movements that profoundly shape neural activity. To understand the relationship between these movements and decision-making, we examined the movements more closely. We tested whether the magnitude or the timing of the movements was correlated with decision-making performance. To do this, we partitioned movements into two groups: task-aligned movements that were well predicted by task events (such as the onset of the sensory stimulus or choice) and task-independent movement (TIM) that occurred independently of task events. TIM had a reliable, inverse correlation with performance in head-restrained mice and freely moving rats. This hinted that the timing of spontaneous movements could indicate periods of disengagement. To confirm this, we compared TIM to the latent behavioral states recovered by a hidden Markov model with Bernoulli generalized linear model observations (GLM-HMM) and found these, again, to be inversely correlated. Finally, we examined the impact of these behavioral states on neural activity. Surprisingly, we found that the same movement impacts neural activity more strongly when animals are disengaged. An intriguing possibility is that these larger movement signals disrupt cognitive computations, leading to poor decision-making performance. Taken together, these observations argue that movements and cognition are closely intertwined, even during expert decision-making.

SeminarNeuroscienceRecording

How fly neurons compute the direction of visual motion

Axel Borst
Max-Planck-Institute for Biological Intelligence
Oct 9, 2023

Detecting the direction of image motion is important for visual navigation, predator avoidance and prey capture, and thus essential for the survival of all animals that have eyes. However, the direction of motion is not explicitly represented at the level of the photoreceptors: it rather needs to be computed by subsequent neural circuits, involving a comparison of the signals from neighboring photoreceptors over time. The exact nature of this process represents a classic example of neural computation and has been a longstanding question in the field. Much progress has been made in recent years in the fruit fly Drosophila melanogaster by genetically targeting individual neuron types to block, activate or record from them. Our results obtained this way demonstrate that the local direction of motion is computed in two parallel ON and OFF pathways. Within each pathway, a retinotopic array of four direction-selective T4 (ON) and T5 (OFF) cells represents the four Cartesian components of local motion vectors (leftward, rightward, upward, downward). Since none of the presynaptic neurons is directionally selective, direction selectivity first emerges within T4 and T5 cells. Our present research focuses on the cellular and biophysical mechanisms by which the direction of image motion is computed in these neurons.
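The comparison-over-time computation described here is classically captured by the Hassenstein-Reichardt correlator. The sketch below is a textbook illustration of that scheme, not the lab's actual model:

```python
# Minimal Hassenstein-Reichardt correlator: delay one photoreceptor signal,
# multiply it with its neighbor, and subtract the mirror-symmetric arm.
import numpy as np

def reichardt(s_left, s_right, delay=5):
    """Opponent motion detector; under the convention below, a positive mean
    output indicates rightward motion."""
    d_left = np.roll(s_left, delay)    # crude delay stage (circular shift)
    d_right = np.roll(s_right, delay)
    return d_left * s_right - d_right * s_left

# Two neighboring photoreceptors viewing a drifting sinusoidal grating:
# rightward motion means the right receptor sees the same signal slightly later.
t = np.linspace(0, 4 * np.pi, 400)
phase_lag = 0.3
s_left = np.sin(t)
s_right = np.sin(t - phase_lag)        # rightward-moving stimulus

out_right = reichardt(s_left, s_right).mean()
out_left = reichardt(s_right, s_left).mean()  # same stimulus moving leftward
print(out_right, out_left)
```

Note that neither input is itself direction selective; selectivity emerges only from the delayed comparison, which mirrors the finding that direction selectivity first appears in T4 and T5 cells rather than in their inputs.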

SeminarNeuroscienceRecording

Diffuse coupling in the brain - A temperature dial for computation

Eli Müller
The University of Sydney
Oct 6, 2023

The neurobiological mechanisms of arousal and anesthesia remain poorly understood. Recent evidence highlights the key role of interactions between the cerebral cortex and the diffusely projecting matrix thalamic nuclei. Here, we interrogate these processes in a whole-brain corticothalamic neural mass model endowed with targeted and diffusely projecting thalamocortical nuclei inferred from empirical data. This model captures key features seen in propofol anesthesia, including diminished network integration, lowered state diversity, impaired susceptibility to perturbation, and decreased corticocortical coherence. Collectively, these signatures reflect a suppression of information transfer across the cerebral cortex. We recover these signatures of conscious arousal by selectively stimulating the matrix thalamus, recapitulating empirical results in macaque, as well as wake-like information processing states that reflect the thalamic modulation of large-scale cortical attractor dynamics. Our results highlight the role of matrix thalamocortical projections in shaping many features of complex cortical dynamics to facilitate the unique communication states supporting conscious awareness.

SeminarNeuroscience

Brain Connectivity Workshop

Ed Bullmore, Jianfeng Feng, Viktor Jirsa, Helen Mayberg, Pedro Valdes-Sosa
Sep 20, 2023

Founded in 2002, the Brain Connectivity Workshop (BCW) is an annual international meeting for in-depth discussions of all aspects of brain connectivity research. By bringing together experts in computational neuroscience, neuroscience methodology and experimental neuroscience, it aims to improve the understanding of the relationship between anatomical connectivity, brain dynamics and cognitive function. These workshops have a unique format, featuring only short presentations followed by intense discussion. This year’s workshop is co-organised by Wellcome, putting the spotlight on brain connectivity in mental health disorders. We look forward to having you join us for this exciting, thought-provoking and inclusive event.

SeminarNeuroscience

NeuroAI from model to understanding: revealing the emergence of computations from the collective dynamics of interacting neurons

Surya Ganguli
Stanford University
Sep 13, 2023
SeminarNeuroscienceRecording

Social and non-social learning: Common, or specialised, mechanisms? (BACN Early Career Prize Lecture 2022)

Jennifer Cook
University of Birmingham, UK
Sep 12, 2023

The last decade has seen a burgeoning interest in studying the neural and computational mechanisms that underpin social learning (learning from others). Many findings support the view that learning from other people is underpinned by the same, ‘domain-general’, mechanisms underpinning learning from non-social stimuli. Despite this, the idea that humans possess social-specific learning mechanisms - adaptive specializations moulded by natural selection to cope with the pressures of group living - persists. In this talk I explore the persistence of this idea. First, I present dissociations between social and non-social learning - patterns of data which are difficult to explain under the domain-general thesis and which therefore support the idea that we have evolved special mechanisms for social learning. Subsequently, I argue that most studies that have dissociated social and non-social learning have employed paradigms in which social information comprises a secondary, additional, source of information that can be used to supplement learning from non-social stimuli. Thus, in most extant paradigms, social and non-social learning differ both in terms of social nature (social or non-social) and status (primary or secondary). I conclude that status is an important driver of apparent differences between social and non-social learning. When we account for differences in status, we see that social and non-social learning share common (dopamine-mediated) mechanisms.

SeminarNeuroscience

Cognitive Computational Neuroscience 2023

Cate Hartley, Helen Barron, James McClelland, Tim Kietzmann, Leslie Kaelbling, Stanislas Dehaene
Aug 24, 2023

CCN is an annual conference that serves as a forum for cognitive science, neuroscience, and artificial intelligence researchers dedicated to understanding the computations that underlie complex behavior.

SeminarNeuroscienceRecording

Interacting spiral wave patterns underlie complex brain dynamics and are related to cognitive processing

Pulin Gong
The University of Sydney
Aug 11, 2023

The large-scale activity of the human brain exhibits rich and complex patterns, but the spatiotemporal dynamics of these patterns and their functional roles in cognition remain unclear. Here by characterizing moment-by-moment fluctuations of human cortical functional magnetic resonance imaging signals, we show that spiral-like, rotational wave patterns (brain spirals) are widespread during both resting and cognitive task states. These brain spirals propagate across the cortex while rotating around their phase singularity centres, giving rise to spatiotemporal activity dynamics with non-stationary features. The properties of these brain spirals, such as their rotational directions and locations, are task relevant and can be used to classify different cognitive tasks. We also demonstrate that multiple, interacting brain spirals are involved in coordinating the correlated activations and de-activations of distributed functional regions; this mechanism enables flexible reconfiguration of task-driven activity flow between bottom-up and top-down directions during cognitive processing. Our findings suggest that brain spirals organize complex spatiotemporal dynamics of the human brain and have functional correlates to cognitive processing.

SeminarNeuroscience

Bernstein Student Workshop Series

Cátia Fortunato
Imperial College London
Jun 15, 2023

The Bernstein Student Workshop Series is an initiative of the student members of the Bernstein Network. It provides a unique opportunity to enhance the technical exchange on a peer-to-peer basis. The series is motivated by the idea of bridging the gap between theoretical and experimental neuroscience by bringing together methodological expertise in the network. Unlike conventional workshops, a talented junior scientist will first give a tutorial about a specific theoretical or experimental technique, and then give a talk about their own research to demonstrate how the technique helps to address neuroscience questions. The workshop series is designed to cover a wide range of theoretical and experimental techniques and to elucidate how different techniques can be applied to answer different types of neuroscience questions. Combining the technical tutorial and the research talk, the workshop series aims to promote knowledge sharing in the community and enhance in-depth discussions among students from diverse backgrounds.

ConferenceNeuroscience

COSYNE 2023

Montreal, Canada
Mar 9, 2023

The COSYNE 2023 conference provided an inclusive forum for exchanging experimental and theoretical approaches to problems in systems neuroscience, continuing the tradition of bringing together the computational neuroscience community. The main meeting was held in Montreal, followed by post-conference workshops in Mont-Tremblant, fostering intensive discussions and collaboration.

ConferenceNeuroscience

Neuromatch 5

Virtual (online)
Sep 27, 2022

Neuromatch 5 (Neuromatch Conference 2022) was a fully virtual conference focused on computational neuroscience broadly construed, including machine learning work with explicit biological links. After four successful Neuromatch conferences, the fifth edition consolidated proven innovations from past events, featuring a series of talks hosted on Crowdcast and flash-talk sessions (pre-recorded videos) with dedicated discussion times on Reddit.

ePosterNeuroscience

Bridging biophysics and computation with differentiable simulation

Michael Deistler, Kyra Kadhim, Jonas Beck, Matthijs Pals, Janne Lappalainen, Manuel Gloeckler, Ziwei Huang, Cornelius Schroeder, Philipp Berens, Pedro Gonçalves, Jakob Macke

Bernstein Conference 2024

ePosterNeuroscience

Complex spatial representations and computations emerge in a memory-augmented network that learns to navigate

Xiangshuai Zeng, Laurenz Wiskott, Sen Cheng

Bernstein Conference 2024

ePosterNeuroscience

Computational implications of motor primitives for cortical motor learning

Natalie Schieferstein, Paul Züge, Raoul-Martin Memmesheimer

Bernstein Conference 2024

ePosterNeuroscience

Computational analysis of optogenetic inhibition of a pyramidal CA1 neuron

Laila Weyn, Thomas Tarnaud, Xavier De Becker, Wout Joseph, Robrecht Raedt, Emmeric Tanghe

Bernstein Conference 2024

ePosterNeuroscience

Computational mechanisms of odor perception and representational drift in rodent olfactory systems

Alexander Roxin, Licheng Zou

Bernstein Conference 2024

ePosterNeuroscience

A computationally efficient simplification of the Brunel-Wang NMDA model: Numerical approach and first results

Jan-Eirik Skaar, Nicolai Haug, Hans Ekkehard Plesser

Bernstein Conference 2024

ePosterNeuroscience

Computational modelling of dentate granule cells reveals Pareto optimal trade-off between pattern separation and energy efficiency (economy)

Martin Mittag, Alexander Bird, Hermann Cuntz, Peter Jedlicka

Bernstein Conference 2024

ePosterNeuroscience

Deep generative networks as a computational approach for global non-linear control modeling in the nematode C. elegans

Doris Voina, Steven Brunton, Jose Kutz

Bernstein Conference 2024

ePosterNeuroscience

Dendritic computation: A comprehensive review of current biological and computational developments

Tim Bax, Pascal Nieters

Bernstein Conference 2024

ePosterNeuroscience

Investigating hippocampal synaptic plasticity in Schizophrenia: a computational and experimental approach using MEA recordings

Sarah Hamdi Cherif, Candice Roux, Valentine Bouet, Jean-Marie Billard, Jérémie Gaidamour, Laure Buhry, Radu Ranta

Bernstein Conference 2024

ePosterNeuroscience

The long and short of dendritic computation: Sequence processing with dendritic plateaus.

Pascal Nieters

Bernstein Conference 2024

ePosterNeuroscience

Non-feedforward architectures enable diverse multisensory computations

Marcus Ghosh, Dan Goodman

Bernstein Conference 2024

ePosterNeuroscience

Physiological Implementation of Synaptic Plasticity at Behavioral Timescales Supports Computational Properties of Place Cell Formation

Hsuan-Pei Huang, Han-Ying Wang, Ching-Tsuey Chen, Ching-Lung Hsu

Bernstein Conference 2024

ePosterNeuroscience

Cerebellum learns to drive cortical dynamics: a computational lesson

Joseph Pemberton, Rui Ponte Costa

COSYNE 2022

ePosterNeuroscience

Computational principles of systems memory consolidation

Jack Lindsey, Ashok Litwin-Kumar

COSYNE 2022

ePosterNeuroscience

Computational strategies and neural correlates of probabilistic reversal learning in mice

Karyna Mishchanchuk, Andrew MacAskill

COSYNE 2022

ePosterNeuroscience

Feedforward and feedback computations in V1 and V2 in a hierarchical Variational Autoencoder

Ferenc Csikor, Balázs Meszéna, Gergő Orbán

COSYNE 2022

ePosterNeuroscience

Flexible inter-areal computations through low-rank communication subspaces

Joao Barbosa, Srdjan Ostojic

COSYNE 2022

ePosterNeuroscience

Latent Equilibrium: A unified learning theory for arbitrarily fast computation with arbitrarily slow neurons

Paul Haider, Benjamin Ellenberger, Laura Kriener, Jakob Jordan, Walter Senn, Mihai A. Petrovici

COSYNE 2022

ePosterNeuroscience

Multitask computation in recurrent networks utilizes shared dynamical motifs

Laura Driscoll, Krishna Shenoy, David Sussillo

COSYNE 2022

ePosterNeuroscience

Probing neural value computations in the nucleus accumbens dopamine signal

Tim Krausz, Alison Comrie, Loren Frank, Nathaniel Daw, Joshua Berke

COSYNE 2022

ePosterNeuroscience

In silico manipulation of cortical computation underlying goal-directed learning

Jae Hoon Shin, Sang Wan Lee, Jee Hang Lee

COSYNE 2022

ePosterNeuroscience

Subcortical modulation of cortical dynamics for motor planning: a computational framework

Jorge Jaramillo, Ulises Pereira, Karel Svoboda, Xiao-Jing Wang

COSYNE 2022

ePosterNeuroscience

Approximate inference through active computation accounts for human categorization behavior

Xiang Li, Luigi Acerbi, Wei Ji Ma

COSYNE 2023

ePosterNeuroscience

Complex computation from developmental priors

Dániel Barabási, Taliesin Beynon, Nicolas Perez-Nievas, Ádám Katona

COSYNE 2023

ePosterNeuroscience

Computation of abstract context in the midbrain reticular nucleus during perceptual decision-making

Jordan Shaker, Dan Birman, Nicholas Steinmetz

COSYNE 2023

ePosterNeuroscience

Computation with sequences of neural assemblies

Max Dabagia, Christos Papadimitriou, Santosh S. Vempala

COSYNE 2023

ePosterNeuroscience

Computational and behavioral mechanisms underlying selecting, stopping, and switching of actions

Shan Zhong & Vasileios Christopoulos

COSYNE 2023

ePosterNeuroscience

Computational mechanisms underlying thalamic regulation of prefrontal signal-to-noise ratio in decision making

Zhe Chen, Xiaohan Zhang, Michael Halassa

COSYNE 2023

ePosterNeuroscience

Context-Dependent Epoch Codes in Association Cortex Shape Neural Computations

Frederick Berl, Hyojung Seo, Daeyeol Lee, John Murray

COSYNE 2023

ePosterNeuroscience

Dynamical Neural Computation in Predictive Sensorimotor Control

Yun Chen, Yiheng Zhang, He Cui

COSYNE 2023

ePosterNeuroscience

Flexible boolean computation by auditory neurons

Grace Ang & Andriy Kozlov

COSYNE 2023

ePosterNeuroscience

Building mechanistic models of neural computations with simulation-based machine learning

Jakob Macke

Bernstein Conference 2024
