Eero Simoncelli, Ph.D.
The Center for Neural Science at New York University (NYU), jointly with the Center for Computational Neuroscience (CCN) at the Flatiron Institute of the Simons Foundation, invites applications for an open rank joint position, with a preference for junior or mid-career candidates. We seek exceptional candidates who use computational frameworks to develop concepts, models, and tools for understanding brain function. Areas of interest include sensory representation and perception, memory, decision-making, adaptation and learning, and motor control. A Ph.D. in a relevant field, such as neuroscience, engineering, physics or applied mathematics, is required. Review of applications will begin 28 March 2021. Further information: * Joint position: https://apply.interfolio.com/83845 * NYU Center for Neural Science: https://www.cns.nyu.edu/ * Flatiron Institute Center for Computational Neuroscience: https://www.simonsfoundation.org/flatiron/center-for-computational-neuroscience/
New York University is seeking exceptional PhD candidates with strong quantitative training (e.g., physics, mathematics, engineering) coupled with a clear interest in scientific study of the brain. Doctoral programs are flexible, allowing students to pursue research across departmental boundaries. Admissions are handled separately by each department, and students interested in pursuing graduate studies should submit an application to the program that best fits their goals and interests.
Geoffrey J Goodhill
An NIH-funded collaboration between David Prober (Caltech), Thai Truong (USC) and Geoff Goodhill (Washington University in St Louis) aims to gain new insight into the neural circuits underlying sleep, through a combination of whole-brain neural recordings in zebrafish and theoretical/computational modeling. A postdoc position is available in the Goodhill lab to contribute to the modeling and computational analysis components. Using novel 2-photon imaging technologies Prober and Truong are recording from the entire larval zebrafish brain at single-neuron resolution continuously for long periods of time, examining neural circuit activity during normal day-night cycles and in response to genetic and pharmacological perturbations. The Goodhill lab is analyzing the resulting huge datasets using a variety of sophisticated computational approaches, and using these results to build new theoretical models that reveal how neural circuits interact to govern sleep.
Professor Geoffrey J Goodhill
The Department of Neuroscience at Washington University School of Medicine is currently recruiting investigators with the passion to create knowledge, pursue bold visions, and challenge canonical thinking as we expand into our new 600,000 sq ft purpose-built neurosciences research building. We are now seeking a tenure-track investigator at the level of Assistant Professor to develop an innovative research program in Theoretical/Computational Neuroscience. The successful candidate will join a thriving theoretical/computational neuroscience community at Washington University, including the new Center for Theoretical and Computational Neuroscience. In addition, the Department also has world-class research strengths in systems, circuits and behavior, and cellular and molecular neuroscience, using a variety of animal models including worms, flies, zebrafish, rodents and non-human primates. We are particularly interested in outstanding researchers who are both creative and collaborative.
Jean-Pascal Pfister
The Theoretical Neuroscience Group of the University of Bern is seeking applications for a PhD position, funded by a Swiss National Science Foundation grant titled “Why Spikes?”. This project aims to answer a nearly century-old question in neuroscience: “What are spikes good for?”. Indeed, since the discovery of action potentials by Lord Adrian in 1926, it has remained largely unknown what the benefits of spiking neurons are when compared to analog neurons. Traditionally, it has been argued that spikes are good for long-distance communication or for temporally precise computation. However, there is no systematic study that quantitatively compares the communication and computational benefits of spiking neurons with those of analog neurons. The aim of the project is to systematically quantify the benefits of spiking at various levels by developing and analyzing appropriate mathematical models. The PhD student will be supervised by Prof. Jean-Pascal Pfister (Theoretical Neuroscience Group, Department of Physiology, University of Bern). The project will involve close collaborations within a highly motivated team as well as regular exchange of ideas with the other theory groups at the institute.
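The spiking-versus-analog comparison at the heart of the project can be made concrete with a toy model. The following is a minimal sketch under invented, hypothetical parameters (not the project's actual models): the same leaky integrator read out either as a graded analog value or through a spike-and-reset threshold.

```python
# Minimal sketch (hypothetical parameters) of the two neuron classes the
# project compares: the same leaky integrator read out as an analog value
# or through a spike-and-reset threshold.

def leaky_integrate(inputs, tau=10.0, dt=1.0, threshold=None):
    """Return the membrane trace and, if threshold is set, spike times."""
    v, trace, spikes = 0.0, [], []
    for t, i_in in enumerate(inputs):
        v += dt * (-v / tau + i_in)
        if threshold is not None and v >= threshold:
            spikes.append(t)
            v = 0.0  # reset after each spike
        trace.append(v)
    return trace, spikes

drive = [0.2] * 100
analog, _ = leaky_integrate(drive)                       # graded output
_, spike_times = leaky_integrate(drive, threshold=1.0)   # discrete events

print(round(analog[-1], 2))  # settles near tau * input = 2.0
print(len(spike_times))      # same drive, expressed as a spike count
```

The analog readout preserves a graded quantity, while the spiking readout converts the same drive into discrete events; quantifying what is gained or lost in that conversion is the kind of question the project formalizes.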
Professor Geoffrey J Goodhill
The Department of Neuroscience at Washington University School of Medicine is seeking a tenure-track investigator at the level of Assistant Professor to develop an innovative research program in Theoretical/Computational Neuroscience. The successful candidate will join a thriving theoretical/computational neuroscience community at Washington University, including the new Center for Theoretical and Computational Neuroscience. In addition, the Department also has world-class research strengths in systems, circuits and behavior, cellular and molecular neuroscience using a variety of animal models including worms, flies, zebrafish, rodents and non-human primates. The Department’s focus on fundamental neuroscience, outstanding research support facilities, and the depth, breadth and collegiality of our culture provide an exceptional environment to launch your independent research program.
“Brain theory, what is it or what should it be?”
In the neurosciences the need for some 'overarching' theory is sometimes expressed, but it is not always obvious what is meant by this. One can perhaps agree that in modern science observation and experimentation are normally complemented by 'theory', i.e. the development of theoretical concepts that help guide and evaluate experiments and measurements. A deeper discussion of 'brain theory' will require the clarification of some further distinctions, in particular: theory vs. model, and brain research (and its theory) vs. neuroscience. Other questions are: Does a theory require mathematics? Or even differential equations? Today it is often taken for granted that the whole universe, including everything in it, for example humans, animals, and plants, can be adequately treated by physics, and therefore that theoretical physics is the overarching theory. Even if this is the case, it has turned out that in some particular parts of physics (the historical example is thermodynamics) it may be useful to simplify the theory by introducing additional theoretical concepts that can in principle be 'reduced' to more complex descriptions at the 'microscopic' level of basic physical particles and forces. In this sense, brain theory may be regarded as part of theoretical neuroscience, which sits inside biophysics and therefore inside physics, or theoretical physics. Still, in neuroscience and brain research, additional concepts are typically used to describe results and help guide experimentation that are 'outside' physics, beginning with neurons and synapses, names of brain parts and areas, up to concepts like 'learning', 'motivation', and 'attention'. Certainly, we do not yet have one theory that includes all these concepts. So 'brain theory' is still in a 'pre-Newtonian' state.
However, it may still be useful to understand in general the relations between a larger theory and its 'parts', or between microscopic and macroscopic theories, or between theories at different 'levels' of description. This is what I plan to do.
Brain network communication: concepts, models and applications
Understanding communication and information processing in nervous systems is a central goal of neuroscience. Over the past two decades, advances in connectomics and network neuroscience have opened new avenues for investigating polysynaptic communication in complex brain networks. Recent work has brought into question the mainstay assumption that connectome signalling occurs exclusively via shortest paths, resulting in a sprawling constellation of alternative network communication models. This Review surveys the latest developments in models of brain network communication. We begin by drawing a conceptual link between the mathematics of graph theory and biological aspects of neural signalling such as transmission delays and metabolic cost. We organize key network communication models and measures into a taxonomy, aimed at helping researchers navigate the growing number of concepts and methods in the literature. The taxonomy highlights the pros, cons and interpretations of different conceptualizations of connectome signalling. We showcase the utility of network communication models as a flexible, interpretable and tractable framework to study brain function by reviewing prominent applications in basic, cognitive and clinical neurosciences. Finally, we provide recommendations to guide the future development, application and validation of network communication models.
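The shortest-path assumption that the Review questions is easy to state computationally. Below is a minimal, hedged sketch (a toy 4-node adjacency matrix with hypothetical values, not real connectome data) of one classic communication measure, global efficiency, built on Floyd–Warshall all-pairs shortest paths:

```python
import math

# Toy weighted "connectome": entry [i][j] is the connection length
# (e.g., inverse connection weight); math.inf marks a missing edge.
INF = math.inf
W = [
    [0,   1,   4,   INF],
    [1,   0,   2,   7],
    [4,   2,   0,   3],
    [INF, 7,   3,   0],
]

def shortest_paths(w):
    """Floyd-Warshall all-pairs shortest path lengths."""
    n = len(w)
    d = [row[:] for row in w]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

D = shortest_paths(W)

# Global efficiency: mean inverse shortest-path length over node pairs.
n = len(D)
eff = sum(1.0 / D[i][j] for i in range(n) for j in range(n) if i != j) / (n * (n - 1))

print(D[0][3])          # -> 6, via the indirect route 0 -> 1 -> 2 -> 3
print(round(eff, 3))    # -> 0.422
```

The "sprawling constellation of alternative models" the Review surveys replaces exactly this shortest-path routing step with other signalling schemes (diffusion, navigation, communicability, and so on) while keeping the same graph substrate.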
How Children Design by Analogy: The Role of Spatial Thinking
Analogical reasoning is a common reasoning tool for learning and problem-solving. Existing research has extensively studied children’s reasoning when comparing, or choosing from ready-made analogies. Relatively less is known about how children come up with analogies in authentic learning environments. Design education provides a suitable context to investigate how children generate analogies for creative learning purposes. Meanwhile, the frequent use of visual analogies in design provides an additional opportunity to understand the role of spatial reasoning in design-by-analogy. Spatial reasoning is one of the most studied human cognitive factors and is critical to the learning of science, technology, engineering, arts, and mathematics (STEAM). There is growing interest in exploring the interplay between analogical reasoning and spatial reasoning. In this talk, I will share qualitative findings from a case study, where a class of 11-to-12-year-olds in the Netherlands participated in a biomimicry design project. These findings illustrate (1) practical ways to support children’s analogical reasoning in the ideation process and (2) the potential role of spatial reasoning as seen in children mapping form-function relationships in nature analogically and adaptively to those in human designs.
Cognitive supports for analogical reasoning in rational number understanding
In cognitive development, learning more than the input provides is a central challenge. This challenge is especially evident in learning the meaning of numbers. Integers – and the quantities they denote – are potentially infinite, as are the fractional values between every integer. Yet children’s experiences of numbers are necessarily finite. Analogy is a powerful learning mechanism for children to learn novel, abstract concepts from only limited input. However, retrieving proper analogy requires cognitive supports. In this talk, I seek to propose and examine number lines as a mathematical schema of the number system to facilitate both the development of rational number understanding and analogical reasoning. To examine these hypotheses, I will present a series of educational intervention studies with third-to-fifth graders. Results showed that a short, unsupervised intervention of spatial alignment between integers and fractions on number lines produced broad and durable gains in fractional magnitudes. Additionally, training on conceptual knowledge of fractions – that fractions denote magnitude and can be placed on number lines – facilitates explicit analogical reasoning. Together, these studies indicate that analogies can play an important role in rational number learning with the help of number lines as schemas. These studies shed light on helpful practices in STEM education curricula and instructions.
Analogical inference in mathematics: from epistemology to the classroom (and back)
In this presentation, we will discuss adaptations of historical examples of mathematical research to bring out some of the intuitive judgments that accompany the working practice of mathematicians when reasoning by analogy. The main epistemological claim that we will aim to illustrate is that a central part of mathematical training consists in developing a quasi-perceptual capacity to distinguish superficial from deep analogies. We think of this capacity as an instance of Hadamard’s (1954) discriminating faculty of the mathematical mind, whereby one is led to distinguish between mere “hookings” (77) and “relay-results” (80): on the one hand, suggestions or ‘hints’, useful to raise questions but not to back up conjectures; on the other, more significant discoveries, which can be used as an evidentiary source in further mathematical inquiry. In the second part of the presentation, we will present some recent applications of this epistemological framework to mathematics education projects for middle and high schools in Italy.
Maths, AI and Neuroscience Meeting Stockholm
To understand brain function and develop artificial general intelligence, it has become abundantly clear that there should be close interaction among neuroscience, machine learning and mathematics. There is a general hope that understanding brain function will provide us with more powerful machine learning algorithms. On the other hand, advances in machine learning are now providing much-needed tools not only to analyse brain activity data but also to design better experiments to expose brain function. Both neuroscience and machine learning explicitly or implicitly deal with high-dimensional data and systems. Mathematics can provide powerful new tools to understand and quantify the dynamics of biological and artificial systems as they generate behavior that may be perceived as intelligent.
The impact of analogical learning approaches on mathematics education
Learning by Analogy in Mathematics
Analogies between old and new concepts are common during classroom instruction. While previous studies of transfer focus on how features of initial learning guide later transfer to new problem solving, less is known about how to best support analogical transfer from previous learning while children are engaged in new learning episodes. Such research may have important implications for teaching and learning in mathematics, which often includes analogies between old and new information. Some existing research promotes supporting learners' explicit connections across old and new information within an analogy. In this talk, I will present evidence that instructors can invite implicit analogical reasoning through warm-up activities designed to activate relevant prior knowledge. Warm-up activities "close the transfer space" between old and new learning without additional direct instruction.
A Framework for a Conscious AI: Viewing Consciousness through a Theoretical Computer Science Lens
We examine consciousness from the perspective of theoretical computer science (TCS), a branch of mathematics concerned with understanding the underlying principles of computation and complexity, including the implications and surprising consequences of resource limitations. We propose a formal TCS model, the Conscious Turing Machine (CTM). The CTM is influenced by Alan Turing's simple yet powerful model of computation, the Turing machine (TM), and by the global workspace theory (GWT) of consciousness originated by cognitive neuroscientist Bernard Baars and further developed by him, Stanislas Dehaene, Jean-Pierre Changeux, George Mashour, and others. However, the CTM is not a standard Turing Machine. It’s not the input-output map that gives the CTM its feeling of consciousness, but what’s under the hood. Nor is the CTM a standard GW model. In addition to its architecture, what gives the CTM its feeling of consciousness is its predictive dynamics (cycles of prediction, feedback and learning), its internal multi-modal language Brainish, and certain special Long Term Memory (LTM) processors, including its Inner Speech and Model of the World processors. Phenomena generally associated with consciousness, such as blindsight, inattentional blindness, change blindness, dream creation, and free will, are considered. Explanations derived from the model draw confirmation from consistencies at a high level, well above the level of neurons, with the cognitive neuroscience literature. Reference. L. Blum and M. Blum, "A theory of consciousness from a theoretical computer science perspective: Insights from the Conscious Turing Machine," PNAS, vol. 119, no. 21, 24 May 2022. https://www.pnas.org/doi/epdf/10.1073/pnas.2115934119
How Children Discover Mathematical Structure through Relational Mapping
A core question in human development is how we bring meaning to conventional symbols. This question is deeply connected to understanding how children learn mathematics—a symbol system with unique vocabularies, syntaxes, and written forms. In this talk, I will present findings from a program of research focused on children’s acquisition of place value symbols (i.e., multidigit number meanings). The base-10 symbol system presents a variety of obstacles to children, particularly in English. Children who cannot overcome these obstacles face years of struggle as they progress through the mathematics curriculum of the upper elementary and middle school grades. Through a combination of longitudinal, cross-sectional, and pretest-training-posttest approaches, I aim to illuminate relational learning mechanisms by which children sometimes succeed in mastering the place value system, as well as instructional techniques we might use to help those who do not.
It’s not over our heads: Why human language needs a body
In the ‘orthodox’ view, cognition has been seen as manipulation of symbolic, mental representations, separate from the body. This dualist Cartesian approach characterised much of twentieth-century thought and is still taken for granted by many people today. Language, too, has for a long time been treated across scientific domains as a system operating largely independently from perception, action, and the body (articulatory-perceptual organs notwithstanding). This could lead one to believe that to emulate linguistic behaviour, it would suffice to develop ‘software’ operating on abstract representations that would work on any computational machine. Yet the brain is not the sole problem-solving resource we have at our disposal. The disembodied picture is inaccurate for numerous reasons, which will be presented, addressing the indissoluble link between cognition, language, body, and environment in understanding and learning. The talk will conclude with implications and suggestions for pedagogy, relevant for disciplines as diverse as instruction in language, mathematics, and sports.
Population coding in the cerebellum: a machine learning perspective
The cerebellum resembles a feedforward, three-layer network of neurons in which the “hidden layer” consists of Purkinje cells (P-cells) and the output layer consists of deep cerebellar nucleus (DCN) neurons. In this analogy, the output of each DCN neuron is a prediction that is compared with the actual observation, resulting in an error signal that originates in the inferior olive. Efficient learning requires that the error signal reach the DCN neurons, as well as the P-cells that project onto them. However, this basic rule of learning is violated in the cerebellum: the olivary projections to the DCN are weak, particularly in adulthood. Instead, an extraordinarily strong signal is sent from the olive to the P-cells, producing complex spikes. Curiously, P-cells are grouped into small populations that converge onto single DCN neurons. Why are the P-cells organized in this way, and what is the membership criterion of each population? Here, I apply elementary mathematics from machine learning and consider the fact that P-cells that form a population exhibit a special property: they can synchronize their complex spikes, which in turn suppress the activity of the DCN neuron they project to. Thus complex spikes not only act as a teaching signal for a P-cell; through complex spike synchrony, a P-cell population may also act as a surrogate teacher for the DCN neuron that produced the erroneous output. It appears that the grouping of P-cells into small populations that share a preference for error satisfies a critical requirement of efficient learning: providing error information both to the output layer neuron (DCN) that was responsible for the error and to the hidden layer neurons (P-cells) that contributed to it. This population coding may account for several remarkable features of behavior during learning, including multiple timescales, protection from erasure, and spontaneous recovery of memory.
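The surrogate-teacher idea can be illustrated with a toy delta-rule network. This is a hedged sketch under invented parameters (the population size, rates, and learning rate are all hypothetical, not taken from the talk): a single, synchronized error signal teaches every P-cell in a population converging on one DCN neuron.

```python
import random

random.seed(0)

# Hypothetical numbers throughout: a population of P-cells converging on
# one DCN neuron. A shared error signal (the olivary "complex spike")
# both teaches each P-cell and reaches the DCN neuron whose output was wrong.
N_PCELLS = 5
w = [random.uniform(0, 1) for _ in range(N_PCELLS)]  # P-cell -> DCN weights
lr = 0.1
target = 0.5  # desired DCN output for a constant input

for step in range(200):
    p_rates = [1.0] * N_PCELLS            # constant parallel-fibre drive
    dcn = sum(wi * ri for wi, ri in zip(w, p_rates))
    error = dcn - target                  # olive: prediction vs. observation
    # Synchronized complex spikes broadcast the same error to every
    # P-cell in the population (the surrogate-teacher idea).
    for i in range(N_PCELLS):
        w[i] -= lr * error * p_rates[i]

print(round(sum(w), 3))  # total drive converges to the target, 0.5
```

Because the whole population shares one error signal, credit assignment happens at the level of the population rather than the single cell, which is one reading of why grouping P-cells by error preference would be useful.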
Neural correlates of temporal processing in humans
Estimating intervals is essential for adaptive behavior and decision-making. Although several theoretical models have been proposed to explain how the brain keeps track of time, there is still no evidence toward a single one. It is often hard to compare different models due to their overlap in behavioral predictions. For this reason, several studies have looked for neural signatures of temporal processing using methods such as electrophysiological recordings (EEG). However, for this strategy to work, it is essential to have consistent EEG markers of temporal processing. In this talk, I'll present results from several studies investigating how temporal information is encoded in the EEG signal. Specifically, across different experiments, we have investigated whether different neural signatures of temporal processing (such as the CNV, the LPC, and early ERPs): 1. Depend on the task to be executed (whether or not it is a temporal task or different types of temporal tasks); 2. Are encoding the physical duration of an interval or how much longer/shorter an interval is relative to a reference. Lastly, I will discuss how these results are consistent with recent proposals that approximate temporal processing with decisional models.
Maths, AI and Neuroscience meeting
To understand brain function and develop artificial general intelligence, it has become abundantly clear that there should be close interaction among neuroscience, machine learning and mathematics. There is a general hope that understanding brain function will provide us with more powerful machine learning algorithms. On the other hand, advances in machine learning are now providing much-needed tools not only to analyse brain activity data but also to design better experiments to expose brain function. Both neuroscience and machine learning explicitly or implicitly deal with high-dimensional data and systems. Mathematics can provide powerful new tools to understand and quantify the dynamics of biological and artificial systems as they generate behavior that may be perceived as intelligent. In this meeting we bring together experts from mathematics, artificial intelligence and neuroscience for a three-day hybrid meeting. We will have talks on mathematical tools, in particular topology, for understanding high-dimensional data; explainable AI; how AI can help neuroscience; and the extent to which the brain may be using algorithms similar to those used in modern machine learning. Finally, we will wrap up with a discussion of some aspects of neural hardware that may not have been considered in machine learning.
When and (maybe) why do high-dimensional neural networks produce low-dimensional dynamics?
There is an avalanche of new data on activity in neural networks and the biological brain, revealing the collective dynamics of vast numbers of neurons. In principle, these collective dynamics can be of almost arbitrarily high dimension, with many independent degrees of freedom — and this may reflect powerful capacities for general computing or information. In practice, neural datasets reveal a range of outcomes, including collective dynamics of much lower dimension — and this may reflect other desiderata for neural codes. For what networks does each case occur? We begin by exploring bottom-up mechanistic ideas that link tractable statistical properties of network connectivity with the dimension of the activity that they produce. We then cover “top-down” ideas that describe how features of connectivity and dynamics that impact dimension arise as networks learn to perform fundamental computational tasks.
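One standard way the notion of "dimension of collective dynamics" is quantified is the participation ratio of the covariance spectrum, PR = (sum of eigenvalues)^2 / (sum of squared eigenvalues). The choice of this particular measure is my assumption, not something the abstract specifies; the sketch below uses toy eigenvalues and only the standard library.

```python
# Hedged sketch: the participation ratio counts how many covariance
# eigenmodes carry appreciable variance, interpolating between 1
# (one dominant mode) and N (variance spread over all N modes).

def participation_ratio(eigenvalues):
    """PR = (sum lambda_i)^2 / sum lambda_i^2 over covariance eigenvalues."""
    s1 = sum(eigenvalues)
    s2 = sum(l * l for l in eigenvalues)
    return s1 * s1 / s2

# Variance spread equally over k modes gives PR = k:
print(participation_ratio([1.0, 1.0, 1.0, 0.0]))          # -> 3.0
# One dominant mode pushes PR toward 1, however many neurons there are:
print(round(participation_ratio([10.0, 0.1, 0.1]), 2))    # -> 1.04
```

In this language, the talk's question becomes: for which connectivity statistics and which learned tasks does the PR of network activity stay far below the number of neurons?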
3 Reasons Why You Should Care About Category Theory
Category theory is a branch of mathematics that has been used to organize various regions of mathematics and related sciences from a radical “relation-first” point of view. Why should consciousness researchers care about category theory? There are (at least) 3 reasons: (1) everything is relational; (2) everything is relation; (3) relation is everything. In this talk we explain these reasons more concretely and introduce ideas for applying basic concepts of category theory to consciousness studies.
Physical Computation in Insect Swarms
Our world is full of living creatures that must share information to survive and reproduce. As humans, we easily forget how hard it is to communicate within natural environments. So how do organisms solve this challenge, using only natural resources? Ideas from computer science, physics and mathematics, such as energetic cost, compression, and detectability, define universal criteria that almost all communication systems must meet. We use insect swarms as a model system for identifying how organisms harness the dynamics of communication signals, perform spatiotemporal integration of these signals, and propagate those signals to neighboring organisms. In this talk I will focus on two types of communication in insect swarms: visual communication, in which fireflies communicate over long distances using light signals, and chemical communication, in which bees serve as signal amplifiers to propagate pheromone-based information about the queen’s location.
The quest for the cortical algorithm
The cortical algorithm hypothesis states that there is one common computational framework to solve diverse cognitive problems such as vision, voice recognition and motion control. In my talk, I propose a strategy to guide the search for this algorithm and present a few ideas on what some of its components might look like. I'll explain why a highly interdisciplinary approach, drawing on neuroscience, computer science, mathematics and physics, is needed to make further progress on this important question.
Comparing Multiple Strategies to Improve Mathematics Learning and Teaching
Comparison is a powerful learning process that improves learning in many domains. For over 10 years, my colleagues and I have researched how we can use comparison to support better learning of school mathematics within classroom settings. In 5 short-term experimental, classroom-based studies, we evaluated comparison of solution methods for supporting mathematics knowledge and tested whether prior knowledge impacted effectiveness. We next developed supplemental Algebra I curriculum and professional development for teachers to integrate Comparison and Explanation of Multiple Strategies (CEMS) in their classrooms and tested the promise of the approach when implemented by teachers in two studies. Benefits and challenges emerged in these studies. I will conclude with evidence-based guidelines for effectively supporting comparison and explanation in the classroom. Overall, this program of research illustrates how cognitive science research can guide the design of effective educational materials as well as challenges that occur when bridging from cognitive science research to classroom instruction.
Dr Lindsay reads from "Models of the Mind : How Physics, Engineering and Mathematics Shaped Our Understanding of the Brain" 📖
Though the term has many definitions, computational neuroscience is mainly about applying mathematics to the study of the brain. The brain—a jumble of all different kinds of neurons interconnected in countless ways that somehow produce consciousness—has been described as “the most complex object in the known universe”. Physicists for centuries have turned to mathematics to properly explain some of the most seemingly simple processes in the universe—how objects fall, how water flows, how the planets move. Equations have proved crucial in these endeavors because they capture relationships and make precise predictions possible. How could we expect to understand the most complex object in the universe without turning to mathematics? — The answer is we can’t, and that is why I wrote this book. While I’ve been studying and working in the field for over a decade, most people I encounter have no idea what “computational neuroscience” is or that it even exists. Yet a desire to understand how the brain works is a common and very human interest. I wrote this book to let people in on the ways in which the brain will ultimately be understood: through mathematical and computational theories. — At the same time, I know that both mathematics and brain science are, on their own, intimidating topics for the average reader and may seem downright prohibitive when put together. That is why I’ve avoided (many) equations in the book and focused instead on the driving reasons why scientists have turned to mathematical modeling, what these models have taught us about the brain, and how some surprising interactions between biologists, physicists, mathematicians, and engineers over centuries have laid the groundwork for the future of neuroscience. — Each chapter of Models of the Mind covers a separate topic in neuroscience, starting from individual neurons themselves and building up to the different populations of neurons and brain regions that support memory, vision, movement and more.
These chapters document the history of how mathematics has woven its way into biology and the exciting advances this collaboration has in store.
Structure-mapping in Human Learning
Across species, humans are uniquely able to acquire deep relational systems of the kind needed for mathematics, science, and human language. Analogical comparison processes are a major contributor to this ability. Analogical comparison engages a structure-mapping process (Gentner, 1983) that fosters learning in at least three ways: first, it highlights common relational systems and thereby promotes abstraction; second, it promotes inferences from known situations to less familiar situations; and, third, it reveals potentially important differences between examples. In short, structure-mapping is a domain-general learning process by which abstract, portable knowledge can arise from experience. It is operative from early infancy on, and is critical to the rapid learning we see in human children. Although structure-mapping processes are present pre-linguistically, their scope is greatly amplified by language. Analogical processes are instrumental in learning relational language, and the reverse is also true: relational language acts to preserve relational abstractions and render them accessible for future learning and reasoning.
One Instructional Sequence Fits All? A Conceptual Analysis of the Applicability of Concreteness Fading
According to the concreteness fading approach, instruction should start with concrete representations and progress stepwise to representations that are more idealized. Various researchers have suggested that concreteness fading is a broadly applicable instructional approach. In this talk, we conceptually analyze examples of concreteness fading in mathematics and various science domains. In this analysis, we draw on theories of analogical and relational reasoning and on the literature about learning with multiple representations. Furthermore, we report on an experimental study in which we employed concreteness fading in advanced physics education. The results of the conceptual analysis and the experimental study indicate that concreteness fading may not be as generalizable as has been suggested. The reasons for this limited generalizability are twofold. First, the types of representations and the relations between them differ across different domains. Second, the instructional goals between domains and the subsequent roles of the representations vary.
Cross Domain Generalisation in Humans and Machines
Recent advances in deep learning have produced models that far outstrip human performance in a number of domains. However, where machine learning approaches still fall far short of human-level performance is in the capacity to transfer knowledge across domains. While a human learner will happily apply knowledge acquired in one domain (e.g., mathematics) to a different domain (e.g., cooking; a vinaigrette is really just a ratio between edible fat and acid), machine learning models still struggle profoundly at such tasks. I will present a case that human intelligence might be (at least partially) usefully characterised by our ability to transfer knowledge widely, and a framework that we have developed for learning representations that support such transfer. The model is compared to current machine learning approaches.
European University for Brain and Technology Virtual Opening
The European University for Brain and Technology, NeurotechEU, is opening its doors on the 16th of December. From health & healthcare to learning & education, neuroscience has a key role in addressing some of the most pressing challenges that Europe faces today. Whether the challenge is translating fundamental research to advance the state of the art in the prevention, diagnosis or treatment of brain disorders, or explaining the complex interactions between the brain, individuals and their environments to design novel practices in cities, schools, hospitals, or companies, brain research is already providing solutions for society at large.

There has never been a branch of study as inter- and multi-disciplinary as neuroscience. From the humanities, social sciences and law to the natural sciences, engineering and mathematics, all traditional disciplines in modern universities have an interest in brain and behaviour as a subject matter. Neuroscience holds great promise as an applied science, providing brain-centred or brain-inspired solutions that could benefit society and kindle a new economy in Europe.

NeurotechEU aims to be the backbone of this new vision by bringing together eight leading universities, 250+ partner research institutions, companies, societal stakeholders, cities, and non-governmental organizations to shape education and training for all segments of society and in all regions of Europe. We will educate students across all levels (bachelor’s, master’s, doctoral, as well as life-long learners), train the next generation of multidisciplinary scientists, scholars and graduates, and provide them direct access to cutting-edge infrastructure for fundamental, translational and applied research to help Europe address this challenge.
Analogies, Games and the Learning of Mathematics
Research on analogical processing and reasoning has provided strong evidence that adequate educational analogies have strong, positive effects on the learning of mathematics. In this talk I will show some experimental results suggesting that analogies based on spatial representations may be particularly effective for improving mathematics learning. Since fostering mathematics learning also involves addressing psychosocial factors, such as mathematical anxiety, social incentives to learn, and engagement and motivation, I will argue that one area with great potential is applying analogical research to the development of learning games for mathematics. Finally, I will show some early prototypes of an educational project devoted to developing games designed to foster the learning of early mathematics in kindergarten children.
Abstraction and Analogy in Natural and Artificial Intelligence
Learning by analogy is a powerful tool in children’s developmental repertoire, as well as in educational contexts such as mathematics, where the key knowledge base involves building flexible schemas. However, noticing and learning from analogies develops over time and is cognitively resource-intensive. I review studies that provide insight into the mechanisms driving children’s developing analogy skills, highlighting environmental inputs (parent talk and prior experiences priming attention to relations) and neuro-cognitive factors (executive functions and brain injury). I then note implications for mathematics learning, reviewing experimental findings showing that analogy can improve learning, but also that both individual differences in executive functions (EFs) and environmental factors that reduce available EFs, such as performance pressure, can predict student learning.
MidsummerBrains - computational neuroscience from my point of view
Computational neuroscience is a highly interdisciplinary field ranging from mathematics, physics and engineering to biology, medicine and psychology. Interdisciplinary collaborations have resulted in many groundbreaking innovations, both in research and in application. The basis for successful collaboration is the ability to communicate across disciplines: What projects are the others working on? Which techniques and methods are they using? How is data collected, used and stored? In this webinar series, several experts describe their view of computational neuroscience in theory and application, and share experiences from interdisciplinary projects. The webinar is open to all interested students and researchers. If you are interested in participating live, please send a short message to smartstart@fz-juelich.de. Please note that these lectures will be recorded for subsequent publication as online lecture material.
Relational Reasoning in Curricular Knowledge Components
It is a truth universally acknowledged that relational reasoning is important for learning in Science, Technology, Engineering, and Mathematics (STEM) disciplines. However, much research on relational reasoning uses examples unrelated to STEM concepts (understandably, to control for prior knowledge in many cases). In this talk I will discuss how real STEM concepts can be profitably used in relational reasoning research, using fraction concepts in mathematics as an example.
Parallels between Intuitionistic Mathematics and Neurophenomenology
Neuromatch 5