Please use this identifier to cite or link to this item:
http://arks.princeton.edu/ark:/88435/dsp01tt44pq779
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Witten, Ilana B | - |
dc.contributor.author | Parker, Nathan Francis | - |
dc.contributor.other | Neuroscience Department | - |
dc.date.accessioned | 2020-07-13T02:01:23Z | - |
dc.date.available | 2020-07-13T02:01:23Z | - |
dc.date.issued | 2019 | - |
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/dsp01tt44pq779 | - |
dc.description.abstract | The ability of an animal to learn to associate a given action with its outcome, and subsequently to choose the action most likely to result in reward, is essential to the fitness of a wide range of species, including humans. This process, known as reinforcement learning, has been the subject of extensive experimental and theoretical work and is implicated in a wide range of clinical conditions, including depression, obsessive-compulsive disorder, Tourette's syndrome, and substance abuse. While this work has dramatically increased our understanding of how the brain performs reinforcement learning, several outstanding questions remain. One such question is how the brain both pairs actions with their respective outcomes and generates appropriate actions once they have been learned. In the second chapter of this thesis, I present data demonstrating that information relevant to these two processes is differentially supplied by two distinct neural circuits, each of which is defined by where in the brain it sends its projections. Another unanswered question is how the brain is able to associate an action with an outcome when the two are separated in time. In the third chapter of this thesis, I present data showing that a population of neurons encodes a short-term memory of the performed action that persists into the outcome period. This activity may bridge the gap in time between action and outcome, allowing successful pairing of the two despite their temporal separation. Together, these data substantially expand our understanding of how the brain performs key computations involved in reinforcement learning. A deeper understanding of these neural underpinnings is essential to more effectively treating the myriad conditions associated with this circuit. | - |
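The abstract describes learning to associate actions with outcomes and choosing the action most likely to yield reward; "Q learning" appears among the subject keywords below. As an illustrative sketch only, here is a minimal tabular Q-learning update of the kind commonly used to formalize this process. The two-armed bandit environment, state names, and parameter values are hypothetical examples, not taken from the dissertation.

```python
import random

def q_learning_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Temporal-difference update: nudge Q(state, action) toward
    reward + discounted best future value, pairing an action with its outcome."""
    best_next = max(Q[next_state].values()) if Q[next_state] else 0.0
    td_target = reward + gamma * best_next
    Q[state][action] += alpha * (td_target - Q[state][action])
    return Q

# Hypothetical two-armed bandit: action "b" pays off 80% of the time, "a" 20%.
# A bandit has no meaningful future state, so we pass gamma=0.0 here.
Q = {"s": {"a": 0.0, "b": 0.0}}
random.seed(0)
for _ in range(500):
    # Epsilon-greedy action selection: mostly exploit, occasionally explore.
    if random.random() < 0.1:
        action = random.choice(["a", "b"])
    else:
        action = max(Q["s"], key=Q["s"].get)
    reward = 1.0 if random.random() < (0.8 if action == "b" else 0.2) else 0.0
    Q = q_learning_update(Q, "s", action, reward, "s", gamma=0.0)

print(Q["s"])  # the learned value of "b" should exceed that of "a"
```

This toy agent learns the higher-paying action from trial and error alone, which is the behavioral phenomenon whose neural substrates the dissertation investigates.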
dc.language.iso | en | - |
dc.publisher | Princeton, NJ : Princeton University | - |
dc.relation.isformatof | The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: <a href="http://catalog.princeton.edu">catalog.princeton.edu</a> | - |
dc.subject | Dopamine | - |
dc.subject | Glutamate | - |
dc.subject | Q learning | - |
dc.subject | Reinforcement learning | - |
dc.subject | Reward learning | - |
dc.subject | Striatum | - |
dc.subject.classification | Neurosciences | - |
dc.title | FUNCTIONAL CHARACTERIZATION OF STRIATAL AFFERENT PROJECTIONS IN THE CONTEXT OF REINFORCEMENT LEARNING | - |
dc.type | Academic dissertations (Ph.D.) | - |
Appears in Collections: | Neuroscience |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
Parker_princeton_0181D_13082.pdf | | 17.27 MB | Adobe PDF | View/Download |
Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.