Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01th83m199d
Full metadata record
dc.contributor.advisor: Arora, Sanjeev
dc.contributor.author: Ma, Tengyu
dc.contributor.other: Computer Science Department
dc.date.accessioned: 2017-12-12T19:17:22Z
dc.date.available: 2017-12-12T19:17:22Z
dc.date.issued: 2017
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/dsp01th83m199d
dc.description.abstract: Non-convex optimization is ubiquitous in modern machine learning: recent breakthroughs in deep learning require optimizing non-convex training objectives, and problems that admit accurate convex relaxations can often be solved more efficiently with non-convex formulations. However, the theoretical understanding of non-convex optimization has remained rather limited. Can we extend the algorithmic frontier by efficiently optimizing a family of interesting non-convex functions? Can we successfully apply non-convex optimization to machine learning problems with provable guarantees? How do we interpret the complicated models in machine learning that demand non-convex optimizers? Towards addressing these questions, this thesis theoretically studies several machine learning models, including sparse coding, topic models, matrix completion, linear dynamical systems, and word embeddings. We first consider how to find a coarse solution that can serve as a good starting point for local improvement algorithms such as stochastic gradient descent. We propose efficient methods for sparse coding and topic inference with better provable guarantees. Second, we propose a framework for analyzing local improvement algorithms that start from a coarse solution, and we apply it successfully to the sparse coding problem. Then, we consider a family of non-convex functions in which all local minima are also global minima (along with an additional regularity property). Such functions can be optimized efficiently by local improvement algorithms from a random or arbitrary starting point. The challenge that we address here, in turn, becomes proving that an objective function belongs to this class. We establish such results for the natural learning objectives of matrix completion and linear dynamical systems. Finally, we take steps towards interpreting the non-linear models that require non-convex training algorithms. We reflect on the principles of word embeddings in natural language processing. We give a generative model for text, with which we explain why different non-convex formulations such as word2vec and GloVe learn similar word embeddings with surprising performance: analogous word pairs have embeddings with similar vector differences.
dc.language.iso: en
dc.publisher: Princeton, NJ : Princeton University
dc.relation.isformatof: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: http://catalog.princeton.edu
dc.subject: machine learning
dc.subject: non-convex optimization
dc.subject.classification: Artificial intelligence
dc.subject.classification: Computer science
dc.title: Non-convex Optimization for Machine Learning: Design, Analysis, and Understanding
dc.type: Academic dissertations (Ph.D.)
pu.projectgrantnumber: 690-2143
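
Note: the abstract above states that natural objectives such as the matrix completion objective have no spurious local minima, so local improvement from a random or arbitrary start can succeed. The following is a minimal, hypothetical Python/NumPy sketch of that setting only; it is not code from the thesis, and the matrix sizes, rank, observation rate, and step size are illustrative assumptions. It runs gradient descent from a random initialization on the non-convex objective f(U, V) = ||P_Omega(U V^T - M)||_F^2 / 2.

    # Illustrative sketch (assumed parameters), not the thesis's code:
    # gradient descent from a random start on the non-convex matrix
    # completion objective f(U, V) = ||P_Omega(U V^T - M)||_F^2 / 2.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d, r = 60, 50, 3                                # sizes and target rank (assumed)
    M = rng.standard_normal((n, r)) @ rng.standard_normal((r, d))  # rank-r ground truth
    mask = rng.random((n, d)) < 0.4                    # Omega: ~40% of entries observed

    U = 0.1 * rng.standard_normal((n, r))              # random initialization
    V = 0.1 * rng.standard_normal((d, r))
    lr = 0.01                                          # step size (assumed)
    for _ in range(3000):
        R = mask * (U @ V.T - M)                       # residual on observed entries only
        U, V = U - lr * (R @ V), V - lr * (R.T @ U)    # simultaneous gradient step on f

    print("relative error:", np.linalg.norm(U @ V.T - M) / np.linalg.norm(M))

This sketch only demonstrates the mechanics of local search on such an objective; the thesis's contribution is proving, under suitable assumptions, that the objective has no spurious local minima, so this kind of local improvement provably succeeds.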
Appears in Collections: Computer Science

Files in This Item:
File: Ma_princeton_0181D_12361.pdf
Size: 2.45 MB
Format: Adobe PDF


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.