Temporal Information Integration in Neural Networks

Project funded by Sinergia

Summary

In the last decade, ‘deep learning’ (DL), a brain-inspired, weak form of artificial intelligence, has revolutionized the field of machine learning by achieving unprecedented performance on many real-world tasks, for example in image or speech recognition. However, as spectacular as these advances are, major deficits remain: deep learning networks have to be trained with huge data sets, and their results are usually only impressive when a major effort is devoted to solving one very specific task (such as beating the best human player at the game of Go). The ability of deep learning networks to act as general problem solvers is still far behind what the human brain achieves effortlessly. In particular, the power of deep learning networks remains limited when a task requires the integration of spatiotemporally complex data over extended time periods of more than two seconds.

The main goal of this project is to gain a fundamental and analytical understanding of (1) how neuronal networks store information over short time periods and (2) how they link information across time to build internal models of complex temporal input data. Our proposal is well timed because recent advances in neuroscience now make it possible to record and track the activity of large populations of genetically identified neurons deep in the brain of behaving animals during a temporal learning task – this was simply not possible several years ago. To reach this goal, we combine expertise in developing and using cutting-edge in vivo calcium imaging techniques to probe neuronal population activity in awake, freely behaving animals (group B. Grewe) with expertise in analyzing random networks (group A. Steger). This combination of collaborators allows us to develop network models and hypotheses from the observed data, which we can subsequently test in vivo.
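
To make the notions of "storing information over short time periods" and "linking information across time" concrete, the sketch below defines a toy delayed cue-recall task: a brief cue must be held in memory across a long, noisy delay (for example 200 steps, roughly two seconds at a 100 Hz sampling rate) before it can be reported when a go signal appears. This is a hypothetical Python/NumPy illustration; the task design, function name, and parameters are illustrative assumptions and are not taken from the proposal.

    # Hypothetical toy task (not from the proposal) illustrating temporal integration:
    # a brief cue must be remembered across a long, distractor-filled delay.
    import numpy as np

    def make_delayed_recall_trial(n_cues=4, delay_steps=200, rng=None):
        """Return (inputs, target) for one trial of a delayed cue-recall task.

        inputs : (delay_steps + 2, n_cues + 1) array
                 one-hot cue at t = 0, low-amplitude noise during the delay,
                 and a 'go' signal in the last channel at the final time step
        target : int, index of the cue to be reported after the delay
        """
        rng = np.random.default_rng() if rng is None else rng
        cue = int(rng.integers(n_cues))
        T = delay_steps + 2
        x = rng.normal(0.0, 0.1, size=(T, n_cues + 1))  # distractor noise
        x[0, cue] += 1.0          # brief cue at the first time step
        x[-1, n_cues] += 1.0      # 'go' cue: report the stored identity now
        return x, cue

    # Example: with delay_steps = 200 (about two seconds at 100 Hz), a network
    # can only answer correctly if it retains the cue across the whole delay.
    x, y = make_delayed_recall_trial(delay_steps=200)
    print(x.shape, y)             # (202, 5) and the cue index to report

A task of this kind makes the two project questions operational: solving it requires a mechanism for holding the cue in memory over the delay, and varying the delay or the temporal structure of the distractors probes how information is linked across time.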