Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/99999/fk4bz7nf10
Title: Approximate Bayesian methods for optimal neural coding and decision-making
Authors: Morais, Michael J
Advisors: Pillow, Jonathan W
Contributors: Neuroscience Department
Keywords: approximate inference
Bayesian statistics
decision-making
efficient coding
neural coding
perception
Subjects: Neurosciences
Statistics
Issue Date: 2021
Publisher: Princeton, NJ : Princeton University
Abstract: One fundamental goal of theoretical neuroscience is to understand the normative principles governing the functional organization of neural circuits and, in turn, the extent to which they can be considered optimal. Calling neural representations of information in the brain "optimal" implies a multifarious equilibrium that balances robustness against flexibility, completeness against relevance, and so on, but it need only imply a solution to some optimization program. The exact forms of these programs vary with the modeling goals, neural circuits, tasks, or even animals under investigation. In this dissertation, we explore how we can define neural codes as optimal when they generate optimal behavior -- an easy principle to state, but a hard one to implement. Such a principle would bridge the gap between the classical hypotheses of optimal neural coding, efficient coding and the Bayesian brain, with a common unified theory. In the first study, we analyzed neural population activity in V1 while monkeys performed a visual detection task, and found that a majority of the total choice-related variability is already present in V1 population activity. Such a prominent contribution of non-stimulus activity in classically sensory regions cannot be accommodated by existing models of neural coding, and demands models that can jointly optimize coding and decision-making within a single neural population. In the second study, we derived power-law efficient codes, a natural generalization of classical efficient codes, and showed that they are sufficient to replicate and explain a diverse set of psychophysical results. This broader family can maximize mutual information or minimize the error of perceptual decisions, suggesting that the psychophysical phenomena used to validate normative models could be more general features of perceptual systems than previously appreciated.
In the third study, we translated the problem of joint model learning and decision-making into the language of Bayesian machine learning, and extended a family of methods for decision-aware approximate inference with a novel algorithm that we call loss-calibrated expectation propagation. Understanding how this problem can be solved by a non-biophysical system offers a constructive reference point for future studies of joint coding and decision-making, and of the normative principles that drive decision-related variability in optimal sensory neural codes.
URI: http://arks.princeton.edu/ark:/99999/fk4bz7nf10
Alternate format: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: catalog.princeton.edu
Type of Material: Academic dissertations (Ph.D.)
Language: en
Appears in Collections: Neuroscience

Files in This Item:
File: Morais_princeton_0181D_13681.pdf (9.17 MB, Adobe PDF)


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.