Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp012z10wt107
Full metadata record
dc.contributor.advisor: Liu, Han
dc.contributor.author: Eisenach, Carson
dc.contributor.other: Operations Research and Financial Engineering Department
dc.date.accessioned: 2019-11-05T16:47:22Z
dc.date.available: 2019-11-05T16:47:22Z
dc.date.issued: 2019
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/dsp012z10wt107
dc.description.abstract: Traditional problems in statistics and machine learning are relatively well understood -- they often feature low dimensionality, convex loss functions, and independent, identically distributed data. By contrast, many modern learning problems feature high-dimensional data, non-convex learning objectives, and data distributions that change during the learning process. Whether the problem of interest is labeled statistics, machine learning, statistical learning, or reinforcement learning, methods for solving it can be viewed as the stochastic optimization of some objective function. Accordingly, we address the aforementioned challenges through the lens of statistical optimization -- a statistical approach to understanding and solving stochastic optimization. In particular, we focus on deriving new methodology with computational and statistical guarantees for two classes of problems: recovering and performing inference on latent patterns in high-dimensional graphical models, and continuous control over bounded action spaces. In the first part of this dissertation, we consider a class of cluster-based graphical models. We introduce a novel algorithm for variable clustering named FORCE, based on solving a convex relaxation of the K-means criterion, as well as post-dimension-reduction inferential procedures. In the second part, we consider the reinforcement learning (RL) setting, in which an agent seeks to learn a decision-making policy based on feedback from its environment. We derive a novel class of variance-reduced estimators called Marginal Policy Gradients, and demonstrate both their improved statistical properties and their application to several control tasks.
dc.language.iso: en
dc.publisher: Princeton, NJ : Princeton University
dc.relation.isformatof: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: http://catalog.princeton.edu
dc.subject.classification: Statistics
dc.subject.classification: Mental health
dc.title: Modern Optimization for Statistics and Learning
dc.type: Academic dissertations (Ph.D.)
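
The abstract above describes FORCE as solving a convex relaxation of the K-means criterion. As an illustration only -- not the dissertation's actual solver, which uses a specialized first-order method rather than an off-the-shelf SDP solver -- the sketch below shows the standard semidefinite (Peng-Wei-style) relaxation of K-means in Python with cvxpy. The function name, the use of squared Euclidean distances, and the rounding suggestion are assumptions for the example.

import numpy as np
import cvxpy as cp

def kmeans_sdp(X, K):
    """Illustrative SDP relaxation of K-means (hypothetical helper, not FORCE itself).

    The 0/1 normalized cluster-membership matrix is relaxed to a PSD matrix Z
    with nonnegative entries, unit row sums, and trace K; the objective
    minimizes <D, Z> for the pairwise squared-distance matrix D.
    """
    n = X.shape[0]
    sq = np.sum(X ** 2, axis=1)
    D = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)  # pairwise squared Euclidean distances

    Z = cp.Variable((n, n), PSD=True)
    constraints = [
        Z >= 0,                  # entrywise nonnegative
        cp.sum(Z, axis=1) == 1,  # each row sums to one
        cp.trace(Z) == K,        # trace equals the number of clusters
    ]
    cp.Problem(cp.Minimize(cp.trace(D @ Z)), constraints).solve()
    return Z.value  # approximately block diagonal when clusters are well separated

When the clusters are well separated, the optimal Z is (near) block diagonal, and labels can be recovered by, e.g., running ordinary k-means on the top-K eigenvectors of the returned matrix.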
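The abstract's second contribution concerns policy gradients when the environment only sees a transformed action, e.g. a Gaussian sample clipped to a bounded interval or normalized to a direction. The idea behind a marginal policy gradient is to use the score of the marginal distribution of the transformed action rather than of the pre-transformation sample, which reduces estimator variance. The 1-D clipped-Gaussian sketch below is a hedged illustration of that idea under assumed names and a single scalar action; the dissertation treats a general class of transformations.

import numpy as np
from scipy.stats import norm

def score_mu_marginal(a, mu, sigma, lo, hi):
    """d/dmu of log p(a) for the marginal law of a = clip(u, lo, hi), u ~ N(mu, sigma^2).

    The marginal mixes point masses at the boundaries lo and hi with the
    Gaussian density in the interior; its score replaces the ordinary
    Gaussian score in a REINFORCE-style estimator (illustrative helper).
    """
    if a <= lo:  # P(a = lo) = Phi((lo - mu) / sigma)
        z = (lo - mu) / sigma
        return -norm.pdf(z) / (sigma * norm.cdf(z))
    if a >= hi:  # P(a = hi) = 1 - Phi((hi - mu) / sigma)
        z = (hi - mu) / sigma
        return norm.pdf(z) / (sigma * norm.sf(z))
    return (a - mu) / sigma ** 2  # interior: ordinary Gaussian score

# One-sample REINFORCE-style update of the policy mean (illustrative):
#   mu += lr * reward * score_mu_marginal(a, mu, sigma, lo, hi)

Compared with differentiating log N(u; mu, sigma^2) at the unclipped sample u, the marginal score carries no information about where u landed beyond the boundary, which is exactly the component of the estimator that contributes variance without affecting the reward.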
Appears in Collections: Operations Research and Financial Engineering

Files in This Item:
Eisenach_princeton_0181D_12945.pdf (3.13 MB, Adobe PDF)


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.