Neurons in the entorhinal cortex compute low-dimensional structural information about the environment and are ideally suited to abstract knowledge into learning sets that can be reused in other domains. The ability to focus selectively on relevant information is essential for learning, and learning rewarded cue associations may indeed elicit the release of neuromodulatory signals that facilitate lasting changes to neuronal connections. How neuromodulatory signals influence representations and learning in the entorhinal cortex, however, remains elusive.
Recent advances in recording technology allow simultaneous measurement of neural activity from several thousand neurons across connected brain areas. This offers unique insight into the neural population dynamics underlying attentional modulation and learning in these areas. The complexity of such data calls for new ways of understanding them, including exploratory simulations in, for example, artificial neural networks. At the same time, recent progress in both AI research and neuroscience opens novel synergies for advancing both fields.
A promising path is to implement deep learning networks and compare their learning with learning in the brain. However, such networks are usually simplistic and bear little relation to the brain's neural circuits. Combining detailed insight into network architecture with experimental dissociation of neural contributions and behavioural outcomes holds great, though largely unexplored, potential for developing and studying processes in biologically inspired artificial neural networks.
The candidate will work alongside experimentalists and explore brain data with advanced methodology, including deep learning approaches. Artificial neural network models will then be extended with experimentally motivated features, such as cell types and molecular properties, to address learning scenarios similar to those of the experiments. Together, this enables a comparison of network structure and dynamics during learning in biological and artificial systems.
This two-pronged approach, combining experimental studies and exploratory modeling, will help develop a new level of understanding of robust learning mechanisms in both systems and change the way experiments are conducted.
The project includes collaborations with leading groups at Harvard University and the University of California San Diego.
- You must have a master's degree in physics, computational neuroscience, or artificial intelligence.
- Documented experience with computational modeling, scientific programming, or the implementation and study of neural network systems is an advantage.
Call 2: Project start autumn 2022