Please use this identifier to cite or link to this item:
http://arks.princeton.edu/ark:/88435/dsp01tq57nt894
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Norman, Kenneth | - |
dc.contributor.author | Fan, Kathy | - |
dc.date.accessioned | 2019-09-04T17:42:14Z | - |
dc.date.available | 2019-09-04T17:42:14Z | - |
dc.date.created | 2019-05-02 | - |
dc.date.issued | 2019-09-04 | - |
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/dsp01tq57nt894 | - |
dc.description.abstract | With the ability to form and reinstate episodic memories, humans can not only remember the past but also better comprehend new situations that resemble previous experiences. A memory-augmented neural network (MANN) can likewise benefit from its external memory and perform better in environments where events recur. How do different memory-encoding and retrieval strategies affect this performance? In this project, we use a neurally inspired MANN to study a task setting that models not only continuous events but also the condition of having multiple, potentially confusable memories. We ask the architecture to perform state-prediction tasks and assess how its accuracy and behavior differ under various memory-encoding and retrieval policies. For memory retrieval, we find that compared to a strategy where the amount read is relatively constant throughout an event, a "read more later" policy allows the model to better distinguish between correct and incorrect memories. However, reading less near the beginning of an event effectively delays the time at which memory reinstatement begins and thus decreases the total memory benefit the model obtains. As a result of this trade-off, under our task setting, models that implement the "read more later" policy do not necessarily perform better overall. For memory encoding, we investigate whether an optimal model should store an event as several smaller chunks or as one large chunk. We find that as the chunk size of encoded memories increases, the model achieves higher prediction accuracy. This can be attributed to stronger memory reinstatement, as well as to the model fetching a higher fraction of correct memories.
Taken together, our results are consistent with related work and theories from cognitive psychology, suggesting that, going forward, analyzing the way in which our model makes use of its memory buffer could help us generate new hypotheses about the underlying mechanisms of episodic memory in the brain. | en_US |
dc.format.mimetype | application/pdf | - |
dc.language.iso | en | en_US |
dc.title | Learning When to Encode and Retrieve Episodic Memories With Memory-Augmented Neural Networks | en_US |
dc.type | Princeton University Senior Theses | - |
pu.date.classyear | 2019 | en_US |
pu.department | Computer Science | en_US |
pu.pdf.coverpage | SeniorThesisCoverPage | - |
pu.contributor.authorid | 961168105 | - |
Appears in Collections: | Computer Science, 1988-2020 |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
FAN-KATHY-THESIS.pdf | | 1.37 MB | Adobe PDF | Request a copy |
Items in DataSpace are protected by copyright, with all rights reserved, unless otherwise indicated.