Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01qb98mj16v
Full metadata record
dc.contributor.advisor: Botvinick, Matthew M
dc.contributor.author: Stachenfeld, Kimberly Lauren
dc.contributor.other: Neuroscience Department
dc.date.accessioned: 2018-06-12T17:42:52Z
dc.date.available: 2018-06-12T17:42:52Z
dc.date.issued: 2018
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/dsp01qb98mj16v
dc.description.abstract: Reinforcement learning (RL) has been transformative for neuroscience by providing a normative anchor for interpreting neural and behavioral data. End-to-end RL methods have scored impressive victories with minimal compromises in autonomy, hand-engineering, and generality. The cost of this minimalism in practice is that model-free RL methods are slow to learn and generalize poorly. Humans and animals exhibit substantially greater flexibility, rapidly generalizing learned information to new environments by learning invariants of the environment and features that support fast learning and rapid transfer. An important question for both neuroscience and machine learning is what kind of "representational objectives" encourage humans and other animals to encode structure about the world. This can be formalized as "representation learning," in which the animal or agent learns to form representations with information potentially relevant to the downstream RL process. We will survey different representational objectives that have received attention in neuroscience and in machine learning. The focus of this survey will be first to highlight conditions under which these seemingly unrelated objectives are actually mathematically equivalent. We will use this to motivate a breakdown of the properties of different learned representations that are meaningfully different and can be used to inform contrasting hypotheses for neuroscience. We then use this perspective to motivate our model of the hippocampus. A cognitive map has long been the dominant metaphor for hippocampal function, embracing the idea that place cells encode a geometric representation of space. However, evidence for predictive coding, reward sensitivity, and policy dependence in place cells suggests that the representation is not purely spatial. We approach the problem of understanding hippocampal representations from a reinforcement learning perspective, focusing on what kind of spatial representation is most useful for maximizing future reward. We show that the answer takes the form of a predictive representation (a minimal code sketch follows the metadata record below). This representation captures many aspects of place cell responses that fall outside the traditional view of a cognitive map. We go on to argue that entorhinal grid cells encode a low-dimensional basis set for the predictive representation, useful for suppressing noise in predictions and extracting multiscale structure for hierarchical planning.
dc.language.iso: en
dc.publisher: Princeton, NJ : Princeton University
dc.relation.isformatof: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: http://catalog.princeton.edu
dc.subject: Grid Cell
dc.subject: Hippocampus
dc.subject: Place Cell
dc.subject: Reinforcement Learning
dc.subject: Representation Learning
dc.subject.classification: Neurosciences
dc.subject.classification: Quantitative psychology
dc.subject.classification: Cognitive psychology
dc.title: Learning Neural Representations That Support Efficient Reinforcement Learning
dc.type: Academic dissertations (Ph.D.)
pu.projectgrantnumber: 690-2143
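
The "predictive representation" named in the abstract is commonly formalized as the successor representation (SR). The sketch below is our illustration, not code from the dissertation: it learns a tabular SR with temporal-difference (TD) updates on a toy ring environment (the environment, learning rate, and discount are assumptions chosen for illustration), then shows how value factorizes through the SR and how the SR's leading eigenvectors form the kind of low-dimensional, multiscale basis the abstract attributes to grid cells.

```python
import numpy as np

# Successor representation (SR): M(s, s') estimates the expected discounted
# future occupancy of s' when starting from s,
#   M(s, s') = E[ sum_{t>=0} gamma^t * 1(s_t = s') | s_0 = s ].
# All environment and hyperparameter choices below are illustrative assumptions.

rng = np.random.default_rng(0)
n_states, gamma, lr = 20, 0.95, 0.1
M = np.eye(n_states)  # SR initialized to the identity (the gamma = 0 limit)

# Experience: a random walk on a ring of states (a stand-in environment).
s = 0
for _ in range(50_000):
    s_next = (s + rng.choice([-1, 1])) % n_states
    onehot = np.eye(n_states)[s]
    # TD error for the SR: observed occupancy + discounted bootstrap - estimate
    M[s] += lr * (onehot + gamma * M[s_next] - M[s])
    s = s_next

# Value factorizes through the SR: V = M @ R, so a change in rewards requires
# relearning only R, not M -- one source of the flexibility discussed above.
R = rng.standard_normal(n_states)
V = M @ R

# Grid-cell analogy: the SR's leading eigenvectors give a low-dimensional,
# multiscale basis (periodic modes on the ring), echoing the entorhinal claim.
eigvals, eigvecs = np.linalg.eigh((M + M.T) / 2)   # symmetrize for a real spectrum
basis = eigvecs[:, np.argsort(eigvals)[::-1][:4]]  # top four components
```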
Appears in Collections: Neuroscience

Files in This Item:
File: Stachenfeld_princeton_0181D_12624.pdf (20.03 MB, Adobe PDF)

Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.