Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/99999/fk4m34j53v
Title: Using reinforcement learning to explain psychodynamics
Authors: Dulberg, Zachary
Advisors: Cohen, Jonathan D.
Contributors: Neuroscience Department
Keywords: cognitive science; conflict; grief; machine learning; psychodynamics; reinforcement learning
Subjects: Neurosciences; Psychology; Cognitive psychology
Issue Date: 2024
Publisher: Princeton, NJ : Princeton University
Abstract: In this thesis we consider the use of reinforcement learning (RL) as a formal framework for understanding psychodynamic phenomena. While the field of psychodynamics has influenced a century of theoretical and clinical thinking about psychology, it also faces a number of scientific challenges. We show how RL can provide new explanations for key psychodynamic constructs by grounding them in the principles of reward maximization. The body of the thesis consists of three chapters in which simulations of adaptive agents offer normative accounts of psychodynamic processes. We address the following three phenomena: i) intrapsychic conflict, ii) the dynamics of grief, and iii) the distinct processing of pleasure and pain. We relate these to RL implementations that consider i) modularity, ii) memory replay, and iii) reward-learning operators, respectively, as central explanatory factors. By providing a computational foundation for psychodynamic concepts, this thesis offers a path toward a more rigorous science of psychological dynamics and their disruption in clinical conditions.
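Note (not part of the original record or the thesis): for readers unfamiliar with the RL machinery the abstract invokes, the sketch below illustrates two of the ingredients it names, reward maximization and memory replay, using a minimal tabular Q-learning agent with a small replay buffer. The toy environment, hyperparameters, and structure are illustrative assumptions, not the author's implementation.

```python
# Illustrative sketch only: tabular Q-learning with a replay buffer.
# Environment and parameters are assumed for demonstration purposes.
import random
from collections import deque

N_STATES, N_ACTIONS = 5, 2            # toy chain world: move left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
replay = deque(maxlen=1000)            # memory of past transitions

def step(state, action):
    """Toy dynamics: action 1 moves right, action 0 moves left; reward at the far end."""
    nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

def update(s, a, r, s_next):
    """Q-learning update toward the reward-maximizing bootstrap target."""
    target = r + GAMMA * max(Q[s_next])
    Q[s][a] += ALPHA * (target - Q[s][a])

for episode in range(200):
    s = 0
    for _ in range(20):
        # epsilon-greedy action selection
        if random.random() < EPSILON:
            a = random.randrange(N_ACTIONS)
        else:
            a = max(range(N_ACTIONS), key=lambda x: Q[s][x])
        s_next, r = step(s, a)
        replay.append((s, a, r, s_next))
        update(s, a, r, s_next)
        # memory replay: re-learn from a few stored transitions each step
        for s_m, a_m, r_m, sn_m in random.sample(replay, min(len(replay), 4)):
            update(s_m, a_m, r_m, sn_m)
        s = s_next

print("Learned Q-values:", Q)
```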
URI: http://arks.princeton.edu/ark:/99999/fk4m34j53v
Type of Material: Academic dissertations (Ph.D.)
Language: en
Appears in Collections: Neuroscience

Files in This Item:
File: Dulberg_princeton_0181D_15350.pdf
Size: 30.62 MB
Format: Adobe PDF

