Please use this identifier to cite or link to this item:
http://arks.princeton.edu/ark:/88435/dsp01b2773z61t
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Chen, Yuxin | - |
dc.contributor.advisor | Fan, Jianqing | - |
dc.contributor.author | Ma, Cong | - |
dc.contributor.other | Operations Research and Financial Engineering Department | - |
dc.date.accessioned | 2020-07-13T03:32:40Z | - |
dc.date.available | 2020-07-13T03:32:40Z | - |
dc.date.issued | 2020 | - |
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/dsp01b2773z61t | - |
dc.description.abstract | In recent years, there has been an explosion of interest in designing fast nonconvex optimization algorithms to solve statistical estimation and learning problems. However, in contrast to convex optimization, which has become a real pillar of modern engineering, the theoretical foundations of nonconvex optimization are far from satisfactory, especially in terms of its computational and inferential properties. The main focus of this thesis is therefore to develop a fundamental understanding of nonconvex optimization in statistical estimation and inference. Starting with statistical estimation, we show that: • vanilla gradient descent, in conjunction with spectral initialization, converges linearly for a family of nonconvex statistical learning problems including phase retrieval, matrix completion, and blind deconvolution (see the illustrative sketch after this table); • gradient descent, when initialized randomly, implicitly avoids all the saddle points, thus enabling optimal convergence rates and statistical accuracy at once for the phase retrieval problem. In addition to demonstrating the computational advantages of nonconvex optimization, we also uncover an interesting and inherent connection between nonconvex optimization and convex relaxation: they are essentially equivalent for solving the noisy matrix completion problem. This connection allows us to establish the minimax optimality of convex relaxation vis-à-vis random noise. Going beyond estimation, we further develop a simple procedure to compensate for the bias of the widely used convex and nonconvex estimators for noisy matrix completion. The resulting de-biased estimators admit nearly precise non-asymptotic distributional characterizations, which in turn enable optimal construction of confidence intervals/regions for, say, the missing entries and the low-rank factors. All of this is achieved via an integrated view of statistics and optimization, which we highlight throughout the thesis. | - |
dc.language.iso | en | - |
dc.publisher | Princeton, NJ : Princeton University | - |
dc.relation.isformatof | The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: http://catalog.princeton.edu | - |
dc.subject.classification | Statistics | - |
dc.subject.classification | Operations research | - |
dc.title | Statistics Meets Nonconvex Optimization: Computational Efficiency and Uncertainty Quantification | - |
dc.type | Academic dissertations (Ph.D.) | - |
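To make the abstract's first estimation result concrete, below is a minimal sketch of vanilla gradient descent with spectral initialization for real-valued phase retrieval, in the spirit of Wirtinger flow. This is an illustrative reconstruction, not the thesis's exact algorithm: the dimensions, step size, and iteration count are assumptions chosen for demonstration.

```python
import numpy as np

# Illustrative sketch (assumed constants, not the thesis's analyzed ones):
# vanilla gradient descent with spectral initialization for real-valued
# phase retrieval, i.e. recovering x from y_j = (a_j^T x)^2.

rng = np.random.default_rng(0)
n, m = 100, 1000                 # signal dimension, number of measurements (assumed)
x_star = rng.standard_normal(n)  # ground-truth signal
A = rng.standard_normal((m, n))  # Gaussian sensing vectors a_j as rows
y = (A @ x_star) ** 2            # phaseless measurements

# Spectral initialization: leading eigenvector of (1/m) * sum_j y_j a_j a_j^T,
# rescaled so that the initial norm matches the norm suggested by the data.
Y = (A.T * y) @ A / m
_, eigvecs = np.linalg.eigh(Y)   # eigenvalues returned in ascending order
x = eigvecs[:, -1] * np.sqrt(y.mean())

# Vanilla gradient descent on f(x) = (1/4m) * sum_j ((a_j^T x)^2 - y_j)^2.
eta = 0.1 / y.mean()             # heuristic step size (assumption)
for _ in range(500):
    Ax = A @ x
    x -= eta * (A.T @ ((Ax ** 2 - y) * Ax)) / m

# Report error up to the inherent global sign ambiguity of phase retrieval.
err = min(np.linalg.norm(x - x_star), np.linalg.norm(x + x_star))
print(f"relative error: {err / np.linalg.norm(x_star):.2e}")
```

With sufficiently many Gaussian measurements (here m = 10n), the spectral estimate lands in a neighborhood of the truth and the subsequent iterates contract linearly, matching the abstract's claim of linear convergence up to the unavoidable global sign ambiguity.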
Appears in Collections: Operations Research and Financial Engineering
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
Ma_princeton_0181D_13339.pdf | | 8.04 MB | Adobe PDF |
Items in DataSpace are protected by copyright, with all rights reserved, unless otherwise indicated.