Sensory processing and learning in spiking neural networks
by Lucas Rudelt
Date of Examination: 2024-02-13
Date of issue: 2025-02-12
Advisor: Prof. Dr. Viola Priesemann
Referee: Prof. Dr. Fred Wolf
Referee: Prof. Dr. Jörg Enderlein
Files in this item
Name: PhD_Thesis___revised.pdf
Size: 46.9 MB
Format: PDF
Description: Thesis document
Abstract
Organisms are capable of navigating an extremely complex environment with the help of their senses. To do this, they exploit the fact that high-dimensional sensory inputs can often be predicted from relatively sparse information, such as the location, orientation, and other properties of an object in a visual scene. Learning such sparse, predictive representations not only drastically reduces the amount of information that has to be processed, but may also lead to the discovery of features of the environment that are key for higher cognitive processing. Although ample empirical evidence indicates that sparse and predictive representations are essential for sensory processing, it remains open how exactly this principle shapes neural spiking representations across the stages of sensory processing in the brain. Higher-level representations might be less redundant in time because of an active suppression of predictable information, or more redundant because of an enhanced integration of past information to represent more complex spatio-temporal features.

To investigate this in neural data, we developed a measure of predictability in single-neuron spiking, which is related to temporal redundancy, and a corresponding estimation procedure, as well as an information timescale, which is related to the timescale of temporal integration. In mouse visual cortex, we find that median predictability decreases along an anatomical hierarchy of brain areas, while the median information timescale increases. These results suggest a reduction of temporal redundancy along a sensory processing hierarchy, which coincides with a longer integration of past information in higher sensory areas.

In theoretical work, we further address how spiking neural networks can efficiently learn sparse and predictive representations of sensory inputs. Learning with spiking neurons poses severe challenges because the temporally sparse communication with action potentials prohibits the transmission of the high-fidelity error signals that are required for learning. To address this, we introduce the concept of dendritic error computation, which enables a local computation of error signals at neural dendrites and thus circumvents the need to transmit these signals. We show that dendritic error computation solves key issues of previous learning schemes based on Hebbian-like plasticity in recurrent spiking networks, or on distinct populations of error neurons in hierarchical predictive coding. Strikingly, dendritic error computation not only unifies these theories into a coherent picture of predictive coding in cortex, but also makes novel predictions on the subcellular level, such as a tight excitation-inhibition balance at the level of individual dendritic branches, or a voltage dependence of synaptic plasticity. Finally, we show that mismatch responses in cortex, which have previously been explained through predictive processing with dedicated error neurons, can also arise in neural networks where errors are computed in the dendrites. This provides a new perspective on the role of local dendritic processing in sensory processing and learning in the spiking neural networks of the brain.
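To make the empirical quantities more concrete, below is a minimal, illustrative sketch of how a predictability measure and an information timescale of the kind described above could be estimated from a single spike train. It assumes that predictability is quantified as the mutual information between a neuron's spiking in a small time bin and its own preceding activity over a past range T, normalized by the entropy of current spiking, and that the information timescale is the past range at which this history dependence saturates. The binning, the plug-in entropy estimates, and the saturation criterion are simplifications chosen for illustration; this is not the estimation procedure developed in the thesis.

```python
# Illustrative sketch only; not the thesis's actual estimator.
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def plug_in_entropy(labels):
    """Plug-in (maximum-likelihood) entropy in bits of a discrete sequence."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def history_dependence(spikes, past_bins):
    """R(T): fraction of the entropy of current spiking explained by the
    preceding T = past_bins bins (plug-in estimate, no bias correction)."""
    x = (np.asarray(spikes) > 0).astype(int)
    present = x[past_bins:]
    # encode each window of the preceding `past_bins` bins as one integer state
    past = sliding_window_view(x[:-1], past_bins) @ (2 ** np.arange(past_bins))
    h_present = plug_in_entropy(present)
    mi = plug_in_entropy(past) + h_present - plug_in_entropy(2 * past + present)
    return mi / h_present if h_present > 0 else 0.0

def information_timescale(spikes, max_bins=10, dt=0.005, fraction=0.9):
    """Illustrative timescale: smallest past range at which R(T) reaches a
    given fraction of its value for the longest past range considered."""
    depths = np.arange(1, max_bins + 1)
    R = np.array([history_dependence(spikes, d) for d in depths])
    tau = depths[np.argmax(R >= fraction * R[-1])] * dt
    return tau, R

# toy spike train whose firing rate is slowly modulated, so the recent past
# carries information about current spiking (5 ms bins)
rng = np.random.default_rng(0)
rate = 0.2 + 0.3 * (np.sin(0.05 * np.arange(100_000)) > 0)
spikes = rng.random(100_000) < rate
tau, R = information_timescale(spikes)
print(f"history dependence R(T_max) = {R[-1]:.3f}, information timescale ~ {1000 * tau:.0f} ms")
```

In this toy setting, larger past ranges capture more of the slow rate modulation, so R(T) grows and then levels off; the reported timescale marks where that saturation is reached under the (illustrative) 90% criterion.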
Keywords: information theory; neural networks; unsupervised learning; hierarchical information processing; spiking neurons
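The idea of dendritic error computation can be illustrated with a deliberately simplified, non-spiking toy model in the spirit of dendritic-prediction learning rules (e.g., Urbanczik and Senn, 2014): a neuron's dendritic compartment forms a local prediction of somatic activity, and the mismatch between the two drives synaptic plasticity, so no error signal has to be transmitted between neurons. All variable names and parameters below are illustrative; this sketch is not the spiking model developed in the thesis.

```python
# Simplified rate-based toy model of local (dendritic) error computation.
# Illustrative only; not the thesis's spiking network model.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_steps, eta = 20, 20_000, 0.1

W = rng.normal(0.0, 0.1, n_in)        # plastic synapses onto the dendrite
W_teach = rng.normal(0.0, 1.0, n_in)  # fixed pathway that sets the somatic target

def phi(u):
    """Rate nonlinearity mapping membrane potential to firing rate."""
    return 1.0 / (1.0 + np.exp(-u))

for _ in range(n_steps):
    x = rng.random(n_in)              # presynaptic activity
    v_dend = W @ x                    # dendritic potential: the local prediction
    u_soma = W_teach @ x              # somatic potential, imposed by a teaching input
    # the error is computed locally as the mismatch between somatic firing and
    # the firing predicted from the dendritic potential alone, so no explicit
    # error signal needs to be transmitted between neurons
    delta = phi(u_soma) - phi(v_dend)
    W += eta * delta * x              # plasticity uses only locally available quantities

# after learning, the dendritic prediction should match somatic activity
x_test = rng.random((200, n_in))
mismatch = np.mean(np.abs(phi(x_test @ W_teach) - phi(x_test @ W)))
print(f"mean remaining mismatch after learning: {mismatch:.3f}")
```

The point of the sketch is the locality of the learning signal: the quantity driving each weight update is available at the neuron itself, rather than being delivered by a separate population of error neurons, which is the feature the thesis develops for recurrent spiking networks.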