Functional Characterization of Early Visual Neurons Using Machine Learning
Doctoral thesis
Date of Examination: 2025-09-15
Date of issue: 2025-10-02
Advisor: Prof. Dr. Alexander Ecker
Referee: Prof. Dr. Alexander Ecker
Referee: Prof. Dr. Tim Gollisch
Referee: Prof. Dr. Fred Wolf
Referee: Prof. Dr. Fabian Sinz
Referee: Prof. Dr. Emilie Macé
Referee: Prof. Dr. Viola Priesemann
Files in this item
Name: Thesis-8.pdf
Size: 24.2 MB
Format: PDF
Abstract (English)
Understanding how early visual neurons represent and process sensory input remains a central challenge in systems neuroscience. Key questions include how these neurons encode visual information, how their receptive fields adapt to changes in stimulus statistics, and how they organize into functional cell types. Predictive models serve as essential tools for studying these phenomena: they link features of the visual stimulus to neural responses and enable the inference of functional cell properties from data. Classical modeling approaches based on linear-nonlinear (LN) frameworks have played a foundational role in estimating receptive fields and characterizing neural input-output functions, and extending these models with interpretable features still offers significant opportunities for progress in understanding visual neurons. Concurrently, recent developments in deep learning offer complementary approaches: these models predict neural responses more accurately but make direct interpretation more challenging. In this dissertation, we leverage both approaches, using neural recordings from the retina and primary visual cortex (V1) of primates, salamanders, and mice to build and interpret predictive models that address questions about the receptive field structure and functional properties of early visual system neurons. First, using classical modeling, we demonstrate that retinal ganglion cells (RGCs) in primates systematically adapt their receptive fields when stimulus statistics shift from white noise to natural scenes. We also show that a simple spatial contrast model, based on local mean and variance, can effectively capture nonlinear spatial integration in primate RGCs under naturalistic stimulation. Next, to support further development of predictive models for the retina and V1, we establish benchmarks for both visual areas. For the retina, we systematically benchmark LN models against convolutional neural network (CNN) architectures to identify optimal designs and inductive biases for predicting RGC responses across species and visual stimuli. In mouse V1, we establish a standardized large-scale benchmark in the form of a public competition for dynamic stimulus prediction. Finally, by applying interpretability methods to CNN models, we uncover candidate stimuli that drive wide-field inhibitory interactions mediated by amacrine cells (ACs), and we develop data-driven, unbiased approaches for identifying neuronal types based on their functional responses. Together, this work advances predictive modeling techniques in visual neuroscience and directly applies them to expand our knowledge of the function of the early visual system.
Keywords: retina; machine learning; deep learning; predictive models of the visual system; primary visual cortex; cell types; wide-field amacrine cells; retinal ganglion cells
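To make the modeling terminology in the abstract concrete, the following minimal Python sketch (not taken from the thesis; all function names, parameters, and numbers are illustrative assumptions) shows the generic structure of a linear-nonlinear (LN) model, i.e. a receptive-field filter followed by a pointwise nonlinearity, and a simple spatial contrast readout based on the local mean and variance of a stimulus patch.

```python
# Illustrative sketch only: generic LN model and local mean/variance features.
# Not the thesis's actual implementation; names and values are assumptions.
import numpy as np

def ln_response(stimulus, rf, nonlinearity=np.exp):
    """LN model: project the stimulus onto a receptive field (linear stage),
    then pass the result through a pointwise nonlinearity."""
    drive = np.sum(stimulus * rf)      # linear filtering
    return nonlinearity(drive)         # static nonlinearity, e.g. exponential

def spatial_contrast_features(patch):
    """Summarize a stimulus patch by its local mean luminance and local
    variance (a simple measure of spatial contrast)."""
    return np.mean(patch), np.var(patch)

# Example: random stimulus patch and a Gaussian-shaped receptive field
rng = np.random.default_rng(0)
patch = rng.normal(size=(15, 15))
yy, xx = np.mgrid[-7:8, -7:8]
rf = np.exp(-(xx**2 + yy**2) / (2 * 3.0**2))
rf /= np.linalg.norm(rf)

rate = ln_response(patch, rf)
mean_lum, contrast = spatial_contrast_features(patch)
print(f"LN rate: {rate:.3f}, local mean: {mean_lum:.3f}, local variance: {contrast:.3f}")
```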
