Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp0141687m322
Full metadata record
DC Field: Value (Language)
dc.contributor.advisor: Arora, Sanjeev
dc.contributor.author: Khandeparkar, Hrishikesh
dc.date.accessioned: 2019-09-04T17:55:57Z
dc.date.available: 2019-09-04T17:55:57Z
dc.date.created: 2019-05-14
dc.date.issued: 2019-09-04
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/dsp0141687m322
dc.description.abstract: Supervised learning has long been considered an empirically successful and theoretically well-motivated paradigm in machine learning, and it continues to be an active area of research. On the other hand, recent empirical works have successfully used unlabeled data to learn feature representations that are broadly useful in downstream classification tasks. Several of these methods are reminiscent of the well-known word2vec embedding algorithm: leveraging the availability of pairs of semantically “similar” data points and “negative samples,” the learner forces the inner product of representations of similar pairs to be higher on average than with negative samples. In contrast with supervised learning, such methods lack a strong theoretical grounding and are thus not as well understood. This paper uses the term contrastive learning for such algorithms and presents a theoretical framework to analyze them by introducing latent classes and hypothesizing that semantically similar points are sampled from the same latent class. This minimal framework allows us to show provable guarantees on the performance of the learned representations on the average classification task that consists of a subset of the same set of latent classes. Our generalization bound also shows that learned representations can reduce labeled sample complexity on downstream tasks. To support the theory, we conduct controlled experiments in both the text and image domains using function classes of practical interest. We hope that such theoretical frameworks can, in the future, promote a principled study of unsupervised learning methods. (en_US)
dc.format.mimetype: application/pdf
dc.language.iso: en (en_US)
dc.title: A Theoretical Analysis of Contrastive Unsupervised Representation Learning (en_US)
dc.type: Princeton University Senior Theses
pu.date.classyear: 2019 (en_US)
pu.department: Computer Science (en_US)
pu.pdf.coverpage: SeniorThesisCoverPage
pu.contributor.authorid: 961188911
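
The contrastive objective described in the abstract can be made concrete with a small sketch. The following PyTorch snippet is a minimal illustrative example, not code from the thesis; the names contrastive_loss, f_x, f_pos, and f_neg are hypothetical. It uses a logistic surrogate that is small exactly when the inner product of a representation with its semantically similar pair exceeds its inner product with a negative sample, matching the intuition stated in the abstract.

    import torch
    import torch.nn.functional as F

    def contrastive_loss(f_x, f_pos, f_neg):
        # Hypothetical sketch of a contrastive objective (not from the thesis).
        # f_x, f_pos: (batch, dim) representations of semantically similar pairs.
        # f_neg:      (batch, dim) representations of negative samples.
        sim_pos = (f_x * f_pos).sum(dim=1)  # inner products <f(x), f(x+)>
        sim_neg = (f_x * f_neg).sum(dim=1)  # inner products <f(x), f(x-)>
        # softplus(t) = log(1 + exp(t)): near zero when sim_pos >> sim_neg.
        return F.softplus(sim_neg - sim_pos).mean()

The logistic form above is one common choice of surrogate; hinge-style losses fit the same description, and softplus is used here instead of a raw log(1 + exp(.)) only for numerical stability.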
Appears in Collections: Computer Science, 1988-2020

Files in This Item:
KHANDEPARKAR-HRISHIKESH-THESIS.pdf (564.94 kB, Adobe PDF; available by request)


Items in DataSpace are protected by copyright, with all rights reserved, unless otherwise indicated.