Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/99999/fk4fn2pn1w
Title: Overcoming Sampling and Exploration Challenges in Deep Reinforcement Learning
Authors: Simmons-Edler, Riley
Advisors: Seung, Sebastian H
Contributors: Computer Science Department
Keywords: Deep Learning
Deep Reinforcement Learning
Machine Learning
Reinforcement Learning
Subjects: Artificial intelligence
Issue Date: 2022
Publisher: Princeton, NJ : Princeton University
Abstract: The combination of deep neural networks with reinforcement learning (RL) shows great promise for solving otherwise intractable learning tasks. However, practical demonstrations of deep reinforcement learning remain scarce. The challenges in using deep RL for a given task can be grouped into two categories, broadly “What to learn from experience?” and “What experience to learn from?” In this thesis, I describe work to address the second category: the problem of sampling actions, states, and trajectories that contain information relevant to the learning task. I examine this challenge at three levels of algorithm design and task complexity, from algorithmic components to hybrid combination algorithms that break common RL conventions.

In the first chapter, I describe work on stable and efficient sampling of actions that optimize a Q-function over continuous-valued actions. By combining a sample-based optimizer with neural network approximation, it is possible to obtain stability in training, computational efficiency, and precise inference.

In the second chapter, I describe work on reward-aware exploration: the discovery of desirable behaviors where common sampling methods are insufficient. A teacher “exploration” agent discovers states and trajectories that maximize how much a student “exploitation” agent learns from those experiences, enabling the student to solve hard tasks that are otherwise impossible.

In the third chapter, I describe work combining reinforcement learning with heuristic search, for use in task domains where the transition model is known but the combinatorics of the state space are intractable for traditional search. By combining deep Q-learning with a best-first tree search algorithm, it is possible to find solutions to program synthesis problems with dramatically fewer samples than with common search algorithms or RL alone.

Lastly, I conclude with a summary of the major takeaways of this work and discuss extensions and future directions for efficient sampling in RL.
URI: http://arks.princeton.edu/ark:/99999/fk4fn2pn1w
Alternate format: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: catalog.princeton.edu
Type of Material: Academic dissertations (Ph.D.)
Language: en
Appears in Collections: Computer Science

Files in This Item:
File: SimmonsEdler_princeton_0181D_14106.pdf (6.17 MB, Adobe PDF)


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.