Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01fj236506k
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | Adams, Ryan P | -
dc.contributor.advisor | Griffiths, Tom L | -
dc.contributor.author | Li, Michael | -
dc.date.accessioned | 2020-08-12T13:28:07Z | -
dc.date.available | 2020-08-12T13:28:07Z | -
dc.date.created | 2020-05-03 | -
dc.date.issued | 2020-08-12 | -
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/dsp01fj236506k | -
dc.description.abstract | Humans have a remarkable ability to generalize, using limited experience to efficiently search over decision spaces. Surprisingly, humans can often outperform state-of-the-art machine learning algorithms across a variety of search tasks. One explanation is that humans learn a flexible model of the search space, which they exploit to make good decisions. In this thesis, we investigate whether humans can learn the shared structure among a family of functions. We cast this problem through the lens of learning the kernel hyperparameters of a Gaussian Process. We begin with a thorough analysis of human search strategies in a correlated multi-armed bandit task, with the aim of understanding the limitations of a model that assumes humans fix their kernel hyperparameters. We find that these models systematically undervalue human search strategies. We then introduce a set of function learning tasks in which we iteratively reveal function values and collect human predictions, using a kernel learning framework to determine whether participants adapt their predictions to the environmental structure and show evidence of learning the true kernel hyperparameters. We do not find compelling evidence of hyperparameter adaptation. However, we do show that participants learn function-specific structure and can produce predictions that align closely with Gaussian Process predictions when supplied with ample data and tasked with interpolation. We also find that participants can learn the correct scale of the functions, and that they tend to overestimate smoothness when extrapolating with limited data. | en_US
dc.format.mimetype | application/pdf | -
dc.language.iso | en | en_US
dc.title | Probing the Adaptivity of the Human Kernel | en_US
dc.type | Princeton University Senior Theses | -
pu.date.classyear | 2020 | en_US
pu.department | Computer Science | en_US
pu.pdf.coverpage | SeniorThesisCoverPage | -
pu.contributor.authorid | 961260576 | -
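
The abstract above casts human function learning as learning the kernel hyperparameters of a Gaussian Process. Purely as an illustrative sketch, and not code from the thesis, the snippet below shows one standard way such hyperparameters are fit: maximizing the marginal likelihood of a few revealed function values under an RBF kernel, here assuming Python with NumPy and scikit-learn. The fitted length-scale is the quantity that governs the smoothness the abstract says participants tend to overestimate when extrapolating.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Illustrative sketch only: a smooth "true" function with a handful of
# revealed points, loosely mirroring the iterative-reveal tasks in the abstract.
rng = np.random.default_rng(0)
x_train = np.sort(rng.uniform(0.0, 10.0, size=8)).reshape(-1, 1)
y_train = np.sin(x_train).ravel() + 0.05 * rng.standard_normal(8)

# RBF kernel whose output scale and length-scale are free hyperparameters;
# GaussianProcessRegressor fits them by maximizing the log marginal likelihood.
kernel = ConstantKernel(1.0, (1e-2, 1e2)) * RBF(length_scale=1.0,
                                                length_scale_bounds=(1e-2, 1e2))
gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-4, n_restarts_optimizer=5)
gp.fit(x_train, y_train)

# Posterior mean and uncertainty on a dense grid; the abstract compares
# GP predictions of this kind against human interpolation and extrapolation.
x_test = np.linspace(0.0, 10.0, 200).reshape(-1, 1)
mean, std = gp.predict(x_test, return_std=True)
print(gp.kernel_)  # fitted hyperparameters; the length-scale controls smoothness

Running the sketch prints the fitted kernel; comparing such fits across a family of functions is the kind of hyperparameter adaptation the abstract tests for in human participants.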
Appears in Collections: Computer Science, 1988-2020

Files in This Item:
File | Description | Size | Format
LI-MICHAEL-THESIS.pdf | - | 2.44 MB | Adobe PDF (Request a copy)


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.