Topic · Neuro

artificial intelligence

50 Seminars · 4 ePosters · 2 Positions

Latest

Position · Neuroscience

Burcu Ayşen Ürgen

Bilkent University
Ankara, Turkey
Jan 4, 2026

Bilkent University invites applications for multiple open-rank faculty positions in the Department of Neuroscience. The department plans to expand research activities in certain focus areas and accordingly seeks applications from promising or established scholars who have worked in the following or related fields: (1) cellular/molecular/developmental neuroscience, with a strong emphasis on research involving animal models; and (2) systems/cognitive/computational neuroscience, with a strong emphasis on research involving emerging data-driven approaches, including artificial intelligence, robotics, brain-machine interfaces, virtual reality, computational imaging, and theoretical modeling. Candidates in these areas whose research has a neuroimaging component are particularly encouraged to apply. The department’s interdisciplinary Graduate Program in Neuroscience, which offers Master's and PhD degrees, was established in 2014. The department is affiliated with Bilkent’s Aysel Sabuncu Brain Research Center (ASBAM) and the National Magnetic Resonance Research Center (UMRAM). Faculty affiliated with the department have access to state-of-the-art research facilities in these centers, including animal facilities, cellular/molecular laboratory infrastructure, psychophysics laboratories, eye-tracking laboratories, EEG laboratories, a human-robot interaction laboratory, and two MRI scanners (3T and 1.5T).

Position · Neuroscience

N/A

Center for Neuroscience and Cell Biology of the University of Coimbra (CNC-UC)
Coimbra and Cantanhede, Portugal
Jan 4, 2026

The postdoctoral researcher will conduct research in the modelling and simulation of reward-modulated prosocial behavior and decision-making. The position is part of a larger effort to uncover the computational and mechanistic bases of prosociality and empathy at the behavioral and circuit levels. The role involves working at the interface between experimental data (animal behavior and electrophysiology) and theoretical modelling, with an emphasis on multi-agent reinforcement learning and neural population dynamics.

Seminar · Neuroscience

Active Predictive Coding and the Primacy of Actions in Natural and Artificial Intelligence

Rajesh Rao
University of Washington
Apr 7, 2025

Seminar · Neuroscience · Recording

Brain Emulation Challenge Workshop

Randal A. Koene
Co-Founder and Chief Science Officer, Carboncopies
Feb 21, 2025

The Brain Emulation Challenge workshop will tackle cutting-edge topics such as ground-truthing for validation, leveraging artificial datasets generated from virtual brain tissue, and the transformative potential of virtual brain platforms, including their application to the forthcoming Brain Emulation Challenge.

Seminar · Neuroscience

LLMs and Human Language Processing

Mariya Toneva, Ariel Goldstein, Jean-Rémi King
Max Planck Institute for Software Systems; Hebrew University; École Normale Supérieure
Nov 29, 2024

This webinar convened researchers at the intersection of Artificial Intelligence and Neuroscience to investigate how large language models (LLMs) can serve as valuable “model organisms” for understanding human language processing. Presenters showcased evidence that brain recordings (fMRI, MEG, ECoG) acquired while participants read or listened to unconstrained speech can be predicted by representations extracted from state-of-the-art text- and speech-based LLMs. In particular, text-based LLMs tend to align better with higher-level language regions, capturing more semantic aspects, while speech-based LLMs excel at explaining early auditory cortical responses. However, purely low-level features can drive part of these alignments, complicating interpretations. New methods, including perturbation analyses, highlight which linguistic variables matter for each cortical area and time scale. Further, “brain tuning” of LLMs—fine-tuning on measured neural signals—can improve semantic representations and downstream language tasks. Despite open questions about interpretability and exact neural mechanisms, these results demonstrate that LLMs provide a promising framework for probing the computations underlying human language comprehension and production at multiple spatiotemporal scales.
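The core encoding-model analysis behind these alignment results is, in essence, regularized linear regression from LLM features to brain responses. Below is a minimal sketch on synthetic data; all dimensions and variable names are illustrative assumptions, not taken from any of the talks:

```python
import numpy as np

def fit_ridge(X, Y, alpha=1.0):
    """Closed-form ridge regression mapping LLM features X to brain responses Y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ Y)

rng = np.random.default_rng(0)
X_train = rng.standard_normal((200, 16))   # 200 stimuli x 16-dim "LLM embeddings"
W_true = rng.standard_normal((16, 8))      # hypothetical true feature-to-voxel map
Y_train = X_train @ W_true + 0.1 * rng.standard_normal((200, 8))  # 8 "voxels"

W = fit_ridge(X_train, Y_train, alpha=1.0)

# Encoding-model score: per-voxel correlation between predicted and held-out responses
X_test = rng.standard_normal((50, 16))
Y_pred, Y_test = X_test @ W, X_test @ W_true
r = [np.corrcoef(Y_pred[:, v], Y_test[:, v])[0, 1] for v in range(8)]
```

In practice the same recipe is applied per cortical region and time lag, which is what lets the studies above compare text-based and speech-based LLM features across areas.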

Seminar · Neuroscience

Trends in NeuroAI - Brain-like topography in transformers (Topoformer)

Nicholas Blauch
Jun 7, 2024

Dr. Nicholas Blauch will present his work "Topoformer: Brain-like topographic organization in transformer language models through spatial querying and reweighting". Dr. Blauch is a postdoctoral fellow in the Harvard Vision Lab, advised by Talia Konkle and George Alvarez. Paper link: https://openreview.net/pdf?id=3pLMzgoZSA
Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri | https://groups.google.com/g/medarc-fmri).

Seminar · Neuroscience

Generative models for video games (rescheduled)

Katja Hofmann
Microsoft Research
May 22, 2024

Developing agents capable of modeling complex environments and human behaviors within them is a key goal of artificial intelligence research. Progress towards this goal has exciting potential for applications in video games, from new tools that empower game developers to realize new creative visions, to enabling new kinds of immersive player experiences. This talk focuses on recent advances of my team at Microsoft Research towards scalable machine learning architectures that effectively capture human gameplay data. In the first part of my talk, I will focus on diffusion models as generative models of human behavior. Previously shown to have impressive image generation capabilities, I present insights that unlock applications to imitation learning for sequential decision making. In the second part of my talk, I discuss a recent project taking ideas from language modeling to build a generative sequence model of an Xbox game.

Seminar · Neuroscience

Generative models for video games

Katja Hofmann
Microsoft Research
May 1, 2024

Developing agents capable of modeling complex environments and human behaviors within them is a key goal of artificial intelligence research. Progress towards this goal has exciting potential for applications in video games, from new tools that empower game developers to realize new creative visions, to enabling new kinds of immersive player experiences. This talk focuses on recent advances of my team at Microsoft Research towards scalable machine learning architectures that effectively capture human gameplay data. In the first part of my talk, I will focus on diffusion models as generative models of human behavior. Previously shown to have impressive image generation capabilities, I present insights that unlock applications to imitation learning for sequential decision making. In the second part of my talk, I discuss a recent project taking ideas from language modeling to build a generative sequence model of an Xbox game.

Seminar · Neuroscience

Learning produces a hippocampal cognitive map in the form of an orthogonalized state machine

Nelson Spruston
Janelia, Ashburn, USA
Mar 6, 2024

Cognitive maps confer animals with flexible intelligence by representing spatial, temporal, and abstract relationships that can be used to shape thought, planning, and behavior. Cognitive maps have been observed in the hippocampus, but their algorithmic form and the processes by which they are learned remain obscure. Here, we employed large-scale, longitudinal two-photon calcium imaging to record activity from thousands of neurons in the CA1 region of the hippocampus while mice learned to efficiently collect rewards from two subtly different versions of linear tracks in virtual reality. The results provide a detailed view of the formation of a cognitive map in the hippocampus. Throughout learning, both the animal behavior and hippocampal neural activity progressed through multiple intermediate stages, gradually revealing improved task representation that mirrored improved behavioral efficiency. The learning process led to progressive decorrelations in initially similar hippocampal neural activity within and across tracks, ultimately resulting in orthogonalized representations resembling a state machine capturing the inherent structure of the task. We show that a Hidden Markov Model (HMM) and a biologically plausible recurrent neural network trained using Hebbian learning can both capture core aspects of the learning dynamics and the orthogonalized representational structure in neural activity. In contrast, we show that gradient-based learning of sequence models such as Long Short-Term Memory networks (LSTMs) and Transformers does not naturally produce such orthogonalized representations. We further demonstrate that mice exhibited adaptive behavior in novel task settings, with neural activity reflecting flexible deployment of the state machine. These findings shed light on the mathematical form of cognitive maps, the learning rules that sculpt them, and the algorithms that promote adaptive behavior in animals.
The work thus charts a course toward a deeper understanding of biological intelligence and offers insights toward developing more robust learning algorithms in artificial intelligence.
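As a toy illustration of what "orthogonalized representations" means in the abstract above, consider the cosine similarity between synthetic population vectors evoked by the two tracks, early versus late in learning. The vectors below are random stand-ins, purely illustrative, not the study's data:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two population activity vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(1)
base = rng.standard_normal(500)  # shared activity pattern across 500 "neurons"

# Early in learning: the two tracks evoke nearly identical population activity
early_track_a = base + 0.1 * rng.standard_normal(500)
early_track_b = base + 0.1 * rng.standard_normal(500)

# After learning: the representations have decorrelated toward orthogonality
late_track_a = rng.standard_normal(500)
late_track_b = rng.standard_normal(500)

sim_early = cosine(early_track_a, early_track_b)  # close to 1
sim_late = cosine(late_track_a, late_track_b)     # close to 0
```

The progressive drop of this similarity toward zero over sessions is the signature the study tracks, with distinct near-orthogonal patterns then serving as the discrete states of the learned state machine.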

Seminar · Neuroscience

Trends in NeuroAI - Unified Scalable Neural Decoding (POYO)

Mehdi Azabou
Feb 22, 2024

Lead author Mehdi Azabou will present his work "POYO-1: A Unified, Scalable Framework for Neural Population Decoding" (https://poyo-brain.github.io/). Mehdi is an ML PhD student at Georgia Tech advised by Dr. Eva Dyer. Paper link: https://arxiv.org/abs/2310.16046
Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri | https://groups.google.com/g/medarc-fmri).

Seminar · Neuroscience · Recording

Reimagining the neuron as a controller: A novel model for Neuroscience and AI

Dmitri 'Mitya' Chklovskii
Flatiron Institute, Center for Computational Neuroscience
Feb 5, 2024

We build upon and expand the efficient coding and predictive information models of neurons, presenting a novel perspective that neurons not only predict but also actively influence their future inputs through their outputs. We introduce the concept of neurons as feedback controllers of their environments, a role traditionally considered computationally demanding, particularly when the dynamical system characterizing the environment is unknown. By harnessing a novel data-driven control framework, we illustrate the feasibility of biological neurons functioning as effective feedback controllers. This innovative approach enables us to coherently explain various experimental findings that previously seemed unrelated. Our research has profound implications, potentially revolutionizing the modeling of neuronal circuits and paving the way for the creation of alternative, biologically inspired artificial neural networks.

Seminar · Neuroscience

Trends in NeuroAI - Brain-optimized inference for fMRI reconstructions

Reese Kneeland
Jan 5, 2024

Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). Title: Brain-optimized inference improves reconstructions of fMRI brain activity Abstract: The release of large datasets and developments in AI have led to dramatic improvements in decoding methods that reconstruct seen images from human brain activity. We evaluate the prospect of further improving recent decoding methods by optimizing for consistency between reconstructions and brain activity during inference. We sample seed reconstructions from a base decoding method, then iteratively refine these reconstructions using a brain-optimized encoding model that maps images to brain activity. At each iteration, we sample a small library of images from an image distribution (a diffusion model) conditioned on a seed reconstruction from the previous iteration. We select those that best approximate the measured brain activity when passed through our encoding model, and use these images for structural guidance during the generation of the small library in the next iteration. We reduce the stochasticity of the image distribution at each iteration, and stop when a criterion on the "width" of the image distribution is met. We show that when this process is applied to recent decoding methods, it outperforms the base decoding method as measured by human raters, a variety of image feature metrics, and alignment to brain activity. These results demonstrate that reconstruction quality can be significantly improved by explicitly aligning decoding distributions to brain activity distributions, even when the seed reconstruction is output from a state-of-the-art decoding algorithm. Interestingly, the rate of refinement varies systematically across visual cortex, with earlier visual areas generally converging more slowly and preferring narrower image distributions, relative to higher-level brain areas. 
Brain-optimized inference thus offers a succinct and novel method for improving reconstructions and exploring the diversity of representations across visual brain areas. Speaker: Reese Kneeland is a Ph.D. student at the University of Minnesota working in the Naselaris lab. Paper link: https://arxiv.org/abs/2312.07705
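The iterative refinement loop described in the abstract can be sketched in a few lines. Here `encode` and `sample_library` are hypothetical stand-ins for the paper's brain-optimized encoding model and seed-conditioned diffusion model; only the structure of the search (sample a library, keep what best matches measured activity, shrink the distribution width) follows the described method:

```python
import numpy as np

rng = np.random.default_rng(2)
target_brain = rng.standard_normal(64)     # measured brain activity (synthetic stand-in)

def encode(image):
    """Stand-in encoding model mapping an 'image' vector to predicted brain activity.
    Identity here, purely for illustration."""
    return image

def sample_library(seed_image, width, n=32):
    """Stand-in for sampling n images from a diffusion model conditioned on the seed,
    with `width` controlling the stochasticity of the image distribution."""
    return seed_image + width * rng.standard_normal((n, seed_image.size))

best = rng.standard_normal(64)             # seed reconstruction from a base decoder
initial_error = np.linalg.norm(encode(best) - target_brain)
best_error = initial_error
width = 1.0
for iteration in range(20):
    for img in sample_library(best, width):
        err = np.linalg.norm(encode(img) - target_brain)
        if err < best_error:               # keep images that better match brain activity
            best, best_error = img, err
    width *= 0.8                           # reduce stochasticity each iteration
```

In the actual method the stopping rule is a criterion on the width of the image distribution rather than a fixed iteration count, and selection guides the next library structurally rather than by simple averaging.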

Seminar · Neuroscience

Trends in NeuroAI - Meta's MEG-to-image reconstruction

Paul Scotti
Dec 7, 2023

Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). This will be an informal journal club presentation; we do not have an author of the paper joining us. Title: Brain decoding: toward real-time reconstruction of visual perception Abstract: In the past five years, the use of generative and foundational AI systems has greatly improved the decoding of brain activity. Visual perception, in particular, can now be decoded from functional Magnetic Resonance Imaging (fMRI) with remarkable fidelity. This neuroimaging technique, however, suffers from a limited temporal resolution (≈0.5 Hz) and thus fundamentally constrains its real-time usage. Here, we propose an alternative approach based on magnetoencephalography (MEG), a neuroimaging device capable of measuring brain activity with high temporal resolution (≈5,000 Hz). For this, we develop an MEG decoding model trained with both contrastive and regression objectives and consisting of three modules: i) pretrained embeddings obtained from the image, ii) an MEG module trained end-to-end and iii) a pretrained image generator. Our results are threefold: Firstly, our MEG decoder shows a 7X improvement of image-retrieval over classic linear decoders. Second, late brain responses to images are best decoded with DINOv2, a recent foundational image model. Third, image retrievals and generations both suggest that MEG signals primarily contain high-level visual features, whereas the same approach applied to 7T fMRI also recovers low-level features. Overall, these results provide an important step towards the decoding - in real time - of the visual processes continuously unfolding within the human brain. Speaker: Dr. Paul Scotti (Stability AI, MedARC) Paper link: https://arxiv.org/abs/2310.19812

Seminar · Neuroscience

Trends in NeuroAI - SwiFT: Swin 4D fMRI Transformer

Junbeom Kwon
Nov 21, 2023

Trends in NeuroAI is a reading group hosted by the MedARC Neuroimaging & AI lab (https://medarc.ai/fmri). Title: SwiFT: Swin 4D fMRI Transformer Abstract: Modeling spatiotemporal brain dynamics from high-dimensional data, such as functional Magnetic Resonance Imaging (fMRI), is a formidable task in neuroscience. Existing approaches for fMRI analysis utilize hand-crafted features, but the process of feature extraction risks losing essential information in fMRI scans. To address this challenge, we present SwiFT (Swin 4D fMRI Transformer), a Swin Transformer architecture that can learn brain dynamics directly from fMRI volumes in a memory- and computation-efficient manner. SwiFT achieves this by implementing a 4D window multi-head self-attention mechanism and absolute positional embeddings. We evaluate SwiFT using multiple large-scale resting-state fMRI datasets, including the Human Connectome Project (HCP), Adolescent Brain Cognitive Development (ABCD), and UK Biobank (UKB) datasets, to predict sex, age, and cognitive intelligence. Our experimental outcomes reveal that SwiFT consistently outperforms recent state-of-the-art models. Furthermore, by leveraging its end-to-end learning capability, we show that contrastive loss-based self-supervised pre-training of SwiFT can enhance performance on downstream tasks. Additionally, we employ an explainable AI method to identify the brain regions associated with sex classification. To our knowledge, SwiFT is the first Swin Transformer architecture to process 4D spatiotemporal brain functional data in an end-to-end fashion. Our work holds substantial potential in facilitating scalable learning of functional brain imaging in neuroscience research by reducing the hurdles associated with applying Transformer models to high-dimensional fMRI. Speaker: Junbeom Kwon is a research associate working in Prof. Jiook Cha’s lab at Seoul National University. Paper link: https://arxiv.org/abs/2307.05916

Seminar · Neuroscience

BrainLM Journal Club

Connor Lane
Sep 29, 2023

Connor Lane will lead a journal club on the recent BrainLM preprint, which presents a foundation model for fMRI trained with self-supervised masked-autoencoder training. Preprint: https://www.biorxiv.org/content/10.1101/2023.09.12.557460v1 Tweeprint: https://twitter.com/david_van_dijk/status/1702336882301112631?t=Q2-U92-BpJUBh9C35iUbUA&s=19

Seminar · Neuroscience

Algonauts 2023 winning paper journal club (fMRI encoding models)

Huzheng Yang, Paul Scotti
Aug 18, 2023

Algonauts 2023 was a challenge to create the best model that predicts fMRI brain activity given a seen image. The Huze team dominated the competition and released a preprint detailing their approach. This journal club meeting will involve open discussion of the paper, with Q&A with Huze. Paper: https://arxiv.org/pdf/2308.01175.pdf Related paper, also from Huze, that we can discuss: https://arxiv.org/pdf/2307.14021.pdf

Seminar · Neuroscience

1.8 billion regressions to predict fMRI (journal club)

Mihir Tripathy
Jul 28, 2023

Public journal club where this week Mihir will present the 1.8 billion regressions paper (https://www.biorxiv.org/content/10.1101/2022.03.28.485868v2), in which the authors use hundreds of pretrained model embeddings to best predict fMRI activity.

Seminar · Neuroscience · Recording

In search of the unknown: Artificial intelligence and foraging

Nathan Wispinski & Paulo Bruno Serafim
University of Alberta & Gran Sasso Science Institute
Jul 11, 2023

Seminar · Neuroscience · Recording

Consciousness in the age of mechanical minds

Robert Pepperell
Cardiff Metropolitan University
Jun 1, 2023

We are now clearly entering a new age in our relationship with machines. The power of AI natural language processors and image generators has rapidly exceeded the expectations of even those who developed them. Serious questions are now being asked about the extent to which machines could become, or perhaps already are, sentient or conscious. Do AI machines understand the instructions they are given and the answers they provide? In this talk I will consider the prospects for conscious machines, by which I mean machines that have feelings, know about their own existence, and know about ours. I will suggest that the recent focus on information processing in models of consciousness, in which the brain is treated as a kind of digital computer, has misled us about the nature of consciousness and how it is produced in biological systems. Treating the brain as an energy processing system is more likely to yield answers to these fundamental questions and to help us understand how and when machines might become minds.

Seminar · Neuroscience · Recording

AI for Multi-centre Epilepsy Lesion Detection on MRI

Sophie Adler
Mar 1, 2023

Epilepsy surgery is a safe but underutilised treatment for drug-resistant focal epilepsy. One challenge in the presurgical evaluation of patients with drug-resistant epilepsy is the group of patients considered “MRI negative”, i.e. those in whom a structural brain abnormality has not been identified on MRI. A major pathology in “MRI negative” patients is focal cortical dysplasia (FCD), where lesions are often small or subtle and easily missed by visual inspection. In recent years, there has been an explosion in artificial intelligence (AI) research in the field of healthcare. Automated FCD detection is an area where the application of AI may translate into significant improvements in the presurgical evaluation of patients with focal epilepsy. I will provide an overview of our automated FCD detection work, the Multicentre Epilepsy Lesion Detection (MELD) project, and how AI algorithms are beginning to be integrated into epilepsy presurgical planning at Great Ormond Street Hospital and elsewhere around the world. Finally, I will discuss the challenges and future work required to bring AI to the forefront of care for patients with epilepsy.

Seminar · Neuroscience · Recording

Does subjective time interact with the heart rate?

Saeedeh Sadegh
Cornell University, New York
Jan 25, 2023

Decades of research have investigated the relationship between the perception of time and heart rate, with often mixed results. In search of such a relationship, I will present my journey between two projects: from time perception in a realistic VR experience of crowded subway trips, on the order of minutes (project 1), to the perceived duration of sub-second white-noise tones (project 2). Heart rate had multiple concurrent relationships with subjective temporal distortions for the sub-second tones, while the effects were weak or absent for the supra-minute subway trips. What does the heart have to do with sub-second time perception? We addressed this question with a cardiac drift-diffusion model, demonstrating that the sensory accumulation of temporal evidence is a function of heart rate.
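A drift-diffusion account of this kind can be sketched as a simple first-passage simulation. As a labeled assumption (not the speaker's model), a higher heart rate is mapped here to a larger drift rate, so temporal evidence accumulates faster and judgments are reached sooner; all parameter values are illustrative:

```python
import numpy as np

def decision_times(drift, threshold=1.0, dt=0.001, noise=1.0, n_trials=500, seed=0):
    """First-passage times of a drift-diffusion process accumulating temporal evidence."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_trials)                 # accumulated evidence per trial
    t = np.zeros(n_trials)                 # elapsed time per trial
    active = np.ones(n_trials, dtype=bool)
    while active.any():
        step = drift * dt + noise * np.sqrt(dt) * rng.standard_normal(n_trials)
        x = np.where(active, x + step, x)  # freeze trials that already crossed
        t = np.where(active, t + dt, t)
        active = np.abs(x) < threshold
    return t

# Hypothetical modulation: higher heart rate -> larger drift -> faster accumulation
slow_hr_times = decision_times(drift=0.5)
fast_hr_times = decision_times(drift=1.5)
```

Under this assumption the mean first-passage time shrinks as drift grows, which is one way heart rate could distort perceived sub-second durations.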

Seminar · Neuroscience · Recording

On the link between conscious function and general intelligence in humans and machines

Arthur Juliani
Microsoft Research
Nov 18, 2022

In popular media, there is often a connection drawn between the advent of awareness in artificial agents and those same agents simultaneously achieving human or superhuman level intelligence. In this talk, I will examine the validity and potential application of this seemingly intuitive link between consciousness and intelligence. I will do so by examining the cognitive abilities associated with three contemporary theories of conscious function: Global Workspace Theory (GWT), Information Generation Theory (IGT), and Attention Schema Theory (AST), and demonstrating that all three theories specifically relate conscious function to some aspect of domain-general intelligence in humans. With this insight, we will turn to the field of Artificial Intelligence (AI) and find that, while still far from demonstrating general intelligence, many state-of-the-art deep learning methods have begun to incorporate key aspects of each of the three functional theories. Given this apparent trend, I will use the motivating example of mental time travel in humans to propose ways in which insights from each of the three theories may be combined into a unified model. I believe that doing so can enable the development of artificial agents which are not only more generally intelligent but are also consistent with multiple current theories of conscious function.

Seminar · Neuroscience · Recording

Do large language models solve verbal analogies like children do?

Claire Stevenson
University of Amsterdam
Nov 17, 2022

Analogical reasoning, learning about new things by relating them to previous knowledge, lies at the heart of human intelligence and creativity and forms the core of educational practice. Children start creating and using analogies early on, making incredible progress moving from associative processes to successful analogical reasoning. For example, if we ask a four-year-old “Horse belongs to stable like chicken belongs to …?” they may use association and reply “egg”, whereas older children will likely give the intended relational response “chicken coop” (or another term for a chicken’s home). Interestingly, despite state-of-the-art AI language models having superhuman encyclopedic knowledge and superior memory and computational power, our pilot studies show that these large language models often make mistakes, providing associative rather than relational responses to verbal analogies. For example, when we asked four- to eight-year-olds to solve the analogy “body is to feet as tree is to …?” they responded “roots” without hesitation, but large language models tend to provide more associative responses such as “leaves”. In this study we examine the similarities and differences between children's and six large language models' (Dutch/multilingual models: RobBERT, BERT-je, M-BERT, GPT-2, M-GPT, Word2Vec and Fasttext) responses to verbal analogies extracted from an online adaptive learning environment, where >14,000 7-12-year-olds from the Netherlands solved 20 or more items from a database of 900 Dutch-language verbal analogies.

Seminar · Neuroscience

Lifelong Learning AI via neuro inspired solutions

Hava Siegelmann
University of Massachusetts Amherst
Oct 27, 2022

AI embedded in real systems, such as satellites, robots, and other autonomous devices, must make fast, safe decisions even when the environment changes or power is limited; to do so, such systems must be adaptive in real time. To date, edge computing has no real adaptivity: the AI must be trained in advance, typically on a large dataset and with much computational power; once fielded, the AI is frozen. It is unable to use its experience to operate if the environment proves outside its training, or to improve its expertise; worse, since datasets cannot cover all possible real-world situations, systems with such frozen intelligent control are likely to fail. Lifelong Learning is the cutting edge of artificial intelligence, encompassing computational methods that allow systems to learn in runtime and incorporate that learning for application in new, unanticipated situations. Until recently, this sort of computation was found exclusively in nature; thus, Lifelong Learning looks to nature, and in particular neuroscience, for its underlying principles and mechanisms, and then translates them to this new technology. Our presentation will introduce a number of state-of-the-art approaches to achieving AI adaptive learning, including from DARPA's L2M program and subsequent developments. Many environments are affected by temporal changes, such as the time of day, week, or season. One way to create adaptive systems that are both small and robust is to make them aware of time and able to comprehend temporal patterns in the environment. We will describe our current research in temporal AI, while also considering power constraints.

Seminar · Neuroscience · Recording

Associative memory of structured knowledge

Julia Steinberg
Princeton University
Oct 26, 2022

A long-standing challenge in biological and artificial intelligence is to understand how new knowledge can be constructed from known building blocks in a way that is amenable to computation by neuronal circuits. Here we focus on the task of storage and recall of structured knowledge in long-term memory. Specifically, we ask how recurrent neuronal networks can store and retrieve multiple knowledge structures. We model each structure as a set of binary relations between events and attributes (attributes may represent, e.g., temporal order, spatial location, or role in a semantic structure), and map each structure to a distributed neuronal activity pattern using a vector symbolic architecture (VSA) scheme. We then use associative memory plasticity rules to store the binarized patterns as fixed points in a recurrent network. By a combination of signal-to-noise analysis and numerical simulations, we demonstrate that our model allows for efficient storage of these knowledge structures, such that the memorized structures as well as their individual building blocks (e.g., events and attributes) can be subsequently retrieved from partial retrieval cues. We show that long-term memory of structured knowledge relies on a new principle of computation beyond the memory basins. Finally, we show that our model can be extended to store sequences of memories as single attractors.
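The storage-and-retrieval scheme described above can be illustrated with a toy sketch: bipolar VSA codes, binding by elementwise multiplication, superposition, and a Hopfield-style outer-product memory. The names, dimensions, and this particular binding scheme are illustrative assumptions, not the authors' exact model:

```python
import numpy as np

rng = np.random.default_rng(3)
D = 1000                                   # code dimensionality

def rand_code():
    """Random dense bipolar (+1/-1) code for an event or attribute."""
    return rng.choice([-1.0, 1.0], size=D)

# One knowledge structure: bind each event to an attribute (elementwise product),
# superpose the bindings, and binarize
events = {name: rand_code() for name in ["breakfast", "meeting", "dinner"]}
attrs = {name: rand_code() for name in ["morning", "noon", "evening"]}
structure = np.sign(events["breakfast"] * attrs["morning"]
                    + events["meeting"] * attrs["noon"]
                    + events["dinner"] * attrs["evening"])

# Store the binarized pattern as a fixed point with an outer-product (Hopfield) rule
W = np.outer(structure, structure) / D
np.fill_diagonal(W, 0.0)

# Retrieval from a partial cue: corrupt 20% of entries, then one recurrent update
cue = structure.copy()
cue[rng.choice(D, size=D // 5, replace=False)] *= -1
recalled = np.sign(W @ cue)
pattern_overlap = float(recalled @ structure) / D        # near 1: clean recall

# Retrieve an individual building block by unbinding with its partner event
decoded = np.sign(events["breakfast"] * recalled)
attribute_overlap = float(decoded @ attrs["morning"]) / D  # well above chance
```

Because binding is its own inverse for bipolar codes, multiplying the recalled structure by an event code recovers a noisy copy of its bound attribute, which is the sense in which individual building blocks remain retrievable from the stored whole.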

Seminar · Neuroscience · Recording

What do neurons want?

Gabriel Kreiman
Harvard
Oct 25, 2022

Seminar · Neuroscience · Recording

AI-assisted language learning: Assessing learners who memorize and reason by analogy

Pierre-Alexandre Murena
University of Helsinki
Oct 5, 2022

Vocabulary learning applications like Duolingo have millions of users around the world, yet are based on very simple heuristics for choosing the teaching material provided to their users. In this presentation, we will discuss the possibility of developing more advanced artificial teachers based on modeling the learner’s inner characteristics. For teaching vocabulary, understanding how the learner memorizes is enough. When it comes to picking grammar exercises, it becomes essential to assess how the learner reasons, in particular by analogy. This second application will illustrate how analogical and case-based reasoning can be employed in an alternative way in education: not as the teaching algorithm, but as part of the learner’s model.

Seminar · Neuroscience · Recording

Learning static and dynamic mappings with local self-supervised plasticity

Pantelis Vafeidis
California Institute of Technology
Sep 7, 2022

Animals exhibit remarkable learning capabilities with little direct supervision. Likewise, self-supervised learning is an emergent paradigm in artificial intelligence, closing the performance gap to supervised learning. In the context of biology, self-supervised learning corresponds to a setting where one sense or specific stimulus may serve as a supervisory signal for another. After learning, the latter can be used to predict the former. On the implementation level, it has been demonstrated that such predictive learning can occur at the single-neuron level, in compartmentalized neurons that separate and associate information from different streams. We demonstrate the power of such self-supervised learning over unsupervised (Hebb-like) learning rules, which depend heavily on stimulus statistics, in two examples: First, in the context of animal navigation, where predictive learning can associate internal self-motion information always available to the animal with external visual landmark information, leading to accurate path integration in the dark. We focus on the well-characterized fly head direction system and show that our setting learns a connectivity strikingly similar to the one reported in experiments. The mature network is a quasi-continuous attractor and reproduces key experiments in which optogenetic stimulation controls the internal representation of heading, and where the network remaps to integrate with different gains. Second, we show that incorporating global gating by reward prediction errors allows the same setting to learn conditioning at the neuronal level with mixed selectivity. At its core, conditioning entails associating a neural activity pattern induced by an unconditioned stimulus (US) with the pattern arising in response to a conditioned stimulus (CS).
Solving the generic problem of pattern-to-pattern associations naturally leads to emergent cognitive phenomena like blocking, overshadowing, saliency effects, extinction, interstimulus interval effects etc. Surprisingly, we find that the same network offers a reductionist mechanism for causal inference by resolving the post hoc, ergo propter hoc fallacy.

SeminarNeuroscienceRecording

A Framework for a Conscious AI: Viewing Consciousness through a Theoretical Computer Science Lens

Lenore and Manuel Blum
Carnegie Mellon University
Aug 5, 2022

We examine consciousness from the perspective of theoretical computer science (TCS), a branch of mathematics concerned with understanding the underlying principles of computation and complexity, including the implications and surprising consequences of resource limitations. We propose a formal TCS model, the Conscious Turing Machine (CTM). The CTM is influenced by Alan Turing's simple yet powerful model of computation, the Turing machine (TM), and by the global workspace theory (GWT) of consciousness originated by cognitive neuroscientist Bernard Baars and further developed by him, Stanislas Dehaene, Jean-Pierre Changeux, George Mashour, and others. However, the CTM is not a standard Turing Machine. It’s not the input-output map that gives the CTM its feeling of consciousness, but what’s under the hood. Nor is the CTM a standard GW model. In addition to its architecture, what gives the CTM its feeling of consciousness is its predictive dynamics (cycles of prediction, feedback and learning), its internal multi-modal language Brainish, and certain special Long Term Memory (LTM) processors, including its Inner Speech and Model of the World processors. Phenomena generally associated with consciousness, such as blindsight, inattentional blindness, change blindness, dream creation, and free will, are considered. Explanations derived from the model draw confirmation from consistencies at a high level, well above the level of neurons, with the cognitive neuroscience literature. Reference. L. Blum and M. Blum, "A theory of consciousness from a theoretical computer science perspective: Insights from the Conscious Turing Machine," PNAS, vol. 119, no. 21, 24 May 2022. https://www.pnas.org/doi/epdf/10.1073/pnas.2115934119
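
The competition-and-broadcast cycle at the heart of global-workspace-style models can be caricatured in a few lines. This is our own loose illustration, not the formal CTM of the paper, which specifies a probabilistic up-tree competition, Brainish chunks, and prediction-feedback-learning dynamics; the processor names and random weights here are placeholders.

```python
import random

class Processor:
    """A stand-in for a Long Term Memory processor."""
    def __init__(self, name):
        self.name = name
        self.received = []
    def submit(self):
        # each processor proposes a chunk with a (here random) importance weight
        return {"source": self.name, "weight": random.random()}
    def receive(self, chunk):
        self.received.append(chunk)

processors = [Processor(n) for n in ("InnerSpeech", "ModelOfTheWorld", "Vision")]
for _ in range(5):                                    # five conscious cycles
    chunks = [p.submit() for p in processors]
    winner = max(chunks, key=lambda c: c["weight"])   # competition for the stage
    for p in processors:                              # global broadcast of the winner
        p.receive(winner)
```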

SeminarNeuroscienceRecording

Careers for neuroscience in Artificial Intelligence

Rik Henson (and others)
University of Cambridge
Jun 17, 2022

The purpose of this event is twofold: to raise awareness of careers in AI among neuroscience postgraduates and Early Career Researchers (ECRs), and to give commercial organisations the chance to acquire and diversify their talent pool. We know that our early career members are highly motivated and interested in different career pathways, and we wish to help them fulfil their ambitions. This will be a hybrid event held in person at Arca Blanca, Covent Garden, London and also available online. FREE for BNA members!

SeminarNeuroscience

Faking emotions and a therapeutic role for robots and chatbots: Ethics of using AI in psychotherapy

Bipin Indurkhya
Cognitive Science Department, Jagiellonian University, Kraków
May 19, 2022

In recent years, there has been a proliferation of social robots and chatbots designed so that users form an emotional attachment to them. This talk will start by presenting the first such chatbot, a program called Eliza designed by Joseph Weizenbaum in the mid-1960s. Then we will look at some recent robots and chatbots with Eliza-like interfaces and examine their benefits as well as the various ethical issues raised by deploying such systems.

SeminarNeuroscience

Interdisciplinary College

Tarek Besold, Suzanne Dikker, Astrid Prinz, Fynn-Mathis Trautwein, Niklas Keller, Ida Momennejad, Georg von Wichert
Mar 7, 2022

The Interdisciplinary College is an annual spring school which offers a dense state-of-the-art course program in neurobiology, neural computation, cognitive science/psychology, artificial intelligence, machine learning, robotics and philosophy. It is aimed at students, postgraduates and researchers from academia and industry. This year's focus theme, "Flexibility", covers (but is not limited to) the nervous system, the mind, communication, and AI & robotics. All this will be packed into a rich, interdisciplinary program of single- and multi-lecture courses, and less traditional formats.

SeminarNeuroscienceRecording

Implementing structure mapping as a prior in deep learning models for abstract reasoning

Shashank Shekhar
University of Guelph
Mar 3, 2022

Building conceptual abstractions from sensory information and then reasoning about them is central to human intelligence. Abstract reasoning both relies on, and is facilitated by, our ability to make analogies about concepts from known domains to novel domains. The Structure Mapping Theory of human analogical reasoning posits that analogical mappings rely on (higher-order) relations and not on the sensory content of the domain. This enables humans to reason systematically about novel domains, a problem with which machine learning (ML) models tend to struggle. We introduce a two-stage neural network framework, which we label Neural Structure Mapping (NSM), to learn visual analogies from Raven's Progressive Matrices, an abstract visual reasoning test of fluid intelligence. Our framework uses (1) a multi-task visual relationship encoder to extract constituent concepts from raw visual input in the source domain, and (2) a neural module network analogy inference engine to reason compositionally about the inferred relation in the target domain. Our NSM approach (a) isolates the relational structure from the source domain with high accuracy, and (b) successfully utilizes this structure for analogical reasoning in the target domain.

SeminarNeuroscienceRecording

Analogical Reasoning with Neuro-Symbolic AI

Hiroshi Honda
Keio University
Feb 23, 2022

Knowledge discovery with computers requires a huge amount of search; analogical reasoning makes such knowledge discovery efficient. We therefore proposed analogical reasoning systems based on first-order predicate logic using Neuro-Symbolic AI. Neuro-Symbolic AI, a combination of Symbolic AI and artificial neural networks, is easy for humans to interpret and robust against ambiguity and errors in data. We have implemented analogical reasoning systems with Neuro-Symbolic AI models using word embeddings, which can represent similarity between words. Using the proposed systems, we efficiently extracted unknown rules from knowledge bases described in Prolog. The proposed method is the first case of analogical reasoning based on first-order predicate logic using deep learning.
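
One way word embeddings can soften symbolic reasoning, in the spirit of the abstract, is to let two predicate symbols "match" when their embeddings are sufficiently similar rather than requiring exact equality. The sketch below is our own illustration with hand-made, hypothetical vectors, not the proposed system's representation.

```python
import numpy as np

# Hypothetical 3-d "embeddings" chosen so related predicates point similarly.
emb = {
    "parent": np.array([0.9, 0.1, 0.0]),
    "father": np.array([0.8, 0.2, 0.1]),
    "eats":   np.array([0.0, 0.1, 0.9]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def soft_match(pred_a, pred_b, threshold=0.8):
    """Unify exactly, or approximately via embedding similarity."""
    return pred_a == pred_b or cosine(emb[pred_a], emb[pred_b]) >= threshold

# "father(X, Y)" can now stand in for "parent(X, Y)" during rule application:
soft_match("father", "parent")   # similar embeddings -> True
soft_match("eats", "parent")     # dissimilar embeddings -> False
```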

SeminarNeuroscienceRecording

Human-like scene interpretation by a brain-inspired model

Shimon Ullman
Weizmann Inst.
Feb 15, 2022
SeminarNeuroscienceRecording

Embodied Artificial Intelligence: Building brain and body together in bio-inspired robots

Fumiya Iida
Department of Engineering
Nov 16, 2021

TBC

SeminarNeuroscience

Can connectomics help us understand the brain and sustain the revolution in AI?

Moritz Helmstaedter, Grace Lindsay, Tony Zador
Nov 3, 2021

3 short talks and a panel discussion on the topic of "Can connectomics help us understand the brain and sustain the revolution in AI?" Expect beautiful connectomics data, provocative dreaming, realistic critiques and everything in between. Students & post-docs, stay on to meet our 3 amazing speakers. Moderator: Dr Greg Jefferis https://www2.mrc-lmb.cam.ac.uk/group-leaders/h-to-m/gregory-jefferis/

SeminarNeuroscienceRecording

Storythinking: Why Your Brain is Creative in Ways that Computer AI Can't Ever Be

Angus Fletcher
Ohio State
Sep 1, 2021

Computer AI thinks differently from us, which is why it's such a useful tool. Thanks to the ingenuity of human programmers, AI's different method of thinking has made humans redundant at certain human tasks, such as chess. Yet there are mechanical limits to how far AI can replicate the products of human thinking. In this talk, we'll trace one such limit by exploring how AI and humans create differently. Humans create by reverse-engineering tools or behaviors to accomplish new actions. AI creates by mix-and-matching pieces of preexisting structures and labeling which combos are associated with positive and negative results. This different procedure is why AI cannot (and will never) learn to innovate technology or tactics and why it also cannot (and will never) learn to generate narratives (including novels, business plans, and scientific hypotheses). It also serves as a case study in why there's no reason to believe in "general intelligence" and why computer AI would have to partner with other mechanical forms of AI (run on non-computer hardware that, as of yet, does not exist, and would require humans to invent) for AI to take over the globe.

SeminarNeuroscienceRecording

Zero-shot visual reasoning with probabilistic analogical mapping

Taylor Webb
UCLA
Jul 1, 2021

There has been a recent surge of interest in the question of whether and how deep learning algorithms might be capable of abstract reasoning, much of which has centered around datasets based on Raven’s Progressive Matrices (RPM), a visual analogy problem set commonly employed to assess fluid intelligence. This has led to the development of algorithms that are capable of solving RPM-like problems directly from pixel-level inputs. However, these algorithms require extensive direct training on analogy problems, and typically generalize poorly to novel problem types. This is in stark contrast to human reasoners, who are capable of solving RPM and other analogy problems zero-shot — that is, with no direct training on those problems. Indeed, it’s this capacity for zero-shot reasoning about novel problem types, i.e. fluid intelligence, that RPM was originally designed to measure. I will present some results from our recent efforts to model this capacity for zero-shot reasoning, based on an extension of a recently proposed approach to analogical mapping we refer to as Probabilistic Analogical Mapping (PAM). Our RPM model uses deep learning to extract attributed graph representations from pixel-level inputs, and then performs alignment of objects between source and target analogs using gradient descent to optimize a graph-matching objective. This extended version of PAM features a number of new capabilities that underscore the flexibility of the overall approach, including 1) the capacity to discover solutions that emphasize either object similarity or relation similarity, based on the demands of a given problem, 2) the ability to extract a schema representing the overall abstract pattern that characterizes a problem, and 3) the ability to directly infer the answer to a problem, rather than relying on a set of possible answer choices. This work suggests that PAM is a promising framework for modeling human zero-shot reasoning.
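
The alignment step described above, gradient descent (here ascent) on a graph-matching objective, can be illustrated with a toy soft-assignment problem. This is our own sketch under simplifying assumptions: feature sizes, learning rate, and a node-similarity-only objective are illustrative, whereas PAM also weighs relation similarity.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # stabilized row-softmax
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(1)
src = rng.normal(size=(4, 32))                     # source object features
perm = rng.permutation(4)                          # hidden correspondence
tgt = src[perm] + 0.01 * rng.normal(size=(4, 32))  # permuted, noisy target

sim = src @ tgt.T                                  # pairwise feature similarity
logits = np.zeros((4, 4))                          # soft-assignment parameters
for _ in range(200):
    P = softmax(logits)                            # rows: source -> target assignment
    # gradient of the matching score sum(P * sim) w.r.t. the row-softmax logits
    grad = P * (sim - (P * sim).sum(axis=1, keepdims=True))
    logits += grad                                 # ascend the matching objective

mapping = P.argmax(axis=1)                         # hardened object correspondence
```

The soft assignment concentrates on the highest-similarity pairing per row, recovering the hidden permutation.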

SeminarNeuroscience

Dynamical Neuromorphic Systems

Julie Grollier
CNRS/Thales lab, Palaiseau, France
Jun 14, 2021

In this talk, I aim to show that the dynamical properties of emerging nanodevices can accelerate the development of smart and environmentally friendly chips that inherently learn through their physics. The goal of neuromorphic computing is to draw inspiration from the architecture of the brain to build low-power circuits for artificial intelligence. I will first give a brief overview of the state of the art of neuromorphic computing, highlighting the opportunities offered by emerging nanodevices in this field, and the associated challenges. I will then show that the intrinsic dynamical properties of these nanodevices can be exploited at the device and algorithmic level to assemble systems that infer and learn through their physics. I will illustrate these possibilities with examples from our work on spintronic neural networks that communicate and compute through their microwave oscillations, and on an algorithm called Equilibrium Propagation that minimizes both the error and energy of a dynamical system.
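
Equilibrium Propagation is concrete enough to sketch: relax the network freely to an equilibrium, relax again with the output weakly nudged toward the target, and update each weight from the difference of local correlations at the two equilibria. The toy below is our own minimal version with linear units and illustrative sizes and rates, not the speaker's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
nx, nh, ny = 3, 4, 2
Wxh = rng.normal(scale=0.2, size=(nx, nh))  # input-hidden symmetric coupling
Why = rng.normal(scale=0.2, size=(nh, ny))  # hidden-output symmetric coupling

def relax(x, target=None, beta=0.0, steps=500, dt=0.2):
    """Settle hidden/output units to an equilibrium of the energy dynamics."""
    h, y = np.zeros(nh), np.zeros(ny)
    for _ in range(steps):
        dh = -h + x @ Wxh + y @ Why.T
        dy = -y + h @ Why
        if target is not None:
            dy = dy + beta * (target - y)   # weak nudge toward the target
        h, y = h + dt * dh, y + dt * dy
    return h, y

x = np.array([0.5, -0.3, 0.8])
target = np.array([0.2, -0.1])
beta, lr = 0.1, 0.5

losses = []
for _ in range(150):
    h_f, y_f = relax(x)                     # free phase
    losses.append(0.5 * np.sum((y_f - target) ** 2))
    h_n, y_n = relax(x, target, beta)       # weakly clamped (nudged) phase
    # contrastive updates: difference of correlations at the two equilibria
    Wxh += lr * np.outer(x, h_n - h_f) / beta
    Why += lr * (np.outer(h_n, y_n) - np.outer(h_f, y_f)) / beta
```

Because the nudged-minus-free correlation difference approximates the loss gradient for small beta, the free-phase error decreases over training without any explicit backpropagation.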

SeminarNeuroscienceRecording

AI-guided solutions for early detection of neurodegenerative disorders

Zoe Kourtzi
Department of Psychology, University of Cambridge
May 25, 2021

Despite the importance of early diagnosis of dementia for prognosis and personalised interventions, we still lack robust tools for predicting individual progression to dementia. We propose a trajectory modelling approach that mines multimodal data from patients at early dementia stages to derive individualised prognostic scores of cognitive decline. Our approach has the potential to facilitate effective stratification of individuals based on prognostic disease trajectories, reducing patient misclassification, with important implications for clinical practice.

ePosterNeuroscience

Non-invasive brain-machine interface control with artificial intelligence copilots

Johannes Lee, Sangjoon Lee, Abhishek Mishra, Xu Yan, Brandon McMahan, Brent Gaisford, Charles Kobashigawa, Mike Qu, Chang Xie, Jonathan Kao

COSYNE 2025

ePosterNeuroscience

Availability of information on artificial intelligence-enhanced hearing aids: A social media analysis

Joanie Ferland, Ariane Blouin, Matthieu J. Guitton, Andréanne Sharp

FENS Forum 2024

ePosterNeuroscience

Constructing an artificial intelligence algorithm based on awake mouse brain calcium imaging as a rapid screening platform for the development of Parkinson's disease drugs

Shiu-Hwa Yeh, Tung Chun-Wei

FENS Forum 2024

ePosterNeuroscience

Development of NTS2-selective non-opioid analgesics using artificial intelligence

Frédérique Lussier, Hadrien Mary, Alexandre Murza, Jean-Michel Longpré, Therence Bois, Sébastien Giguère, Pierre-Luc Boudreault, Philippe Sarret

FENS Forum 2024

artificial intelligence coverage

56 items: 50 seminars, 4 ePosters, 2 positions