Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/99999/fk4932973r
Full metadata record
DC Field: Value
dc.contributor.advisor: Kornhauser, Alain
dc.contributor.author: Hervieux-Moore, Zachary Thomas John
dc.contributor.other: Operations Research and Financial Engineering Department
dc.date.accessioned: 2021-10-04T13:27:24Z
dc.date.available: 2021-10-04T13:27:24Z
dc.date.created: 2021-01-01
dc.date.issued: 2021
dc.identifier.uri: http://arks.princeton.edu/ark:/99999/fk4932973r
dc.description.abstract: When developing reinforcement learning algorithms, the main challenges are dealing with large state spaces and large action spaces. For the most part, the state space complexity problem was solved with the advent of AlphaZero, which handles unfathomably large state spaces by using a combination of neural networks and Monte Carlo tree search (MCTS). However, dealing with large action spaces remains an active area of research. We generalize the AlphaZero algorithm by introducing the GAIL framework and test a variety of alterations. We find that using Thompson sampling as a selection procedure during MCTS could potentially improve upon AlphaZero in two-player zero-sum games; however, AlphaZero is extremely competitive with all variations. We then show the strength of GAIL by applying it to the game of Scrabble, to which AlphaZero cannot be applied due to its extremely large action space. Furthermore, GAIL coupled with the Upper Confidence Bound selection procedure and information set MCTS proves to be state of the art in the game of Scrabble. This also establishes that information set MCTS can be used with a neural network value estimator in reinforcement learning. Finally, we extend these results to the continuous action space domain by developing ROAR, a novel algorithm that drastically lowers action space complexity by making a finite number of action recommendations based on state context and historical performance via reinforcement learning. We end by successfully training it on a nontrivial robot problem.
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.publisher: Princeton, NJ : Princeton University
dc.relation.isformatof: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: <a href=http://catalog.princeton.edu>catalog.princeton.edu</a>
dc.subject: action space
dc.subject: AlphaZero
dc.subject: GAIL
dc.subject: Monte Carlo tree search
dc.subject: reinforcement learning
dc.subject: ROAR
dc.subject.classification: Computer science
dc.title: Modern Reinforcement Learning Techniques to Deal with Large Action Spaces
dc.type: Academic dissertations (Ph.D.)
pu.date.classyear: 2021
pu.department: Operations Research and Financial Engineering
Appears in Collections: Operations Research and Financial Engineering

Files in This Item:
HervieuxMoore_princeton_0181D_13884.pdf (6.51 MB, Adobe PDF)


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.