Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01xp68kk143
Full metadata record
DC Field: Value
dc.contributor.advisor: E, Weinan
dc.contributor.author: Ma, Chao
dc.contributor.other: Mathematics Department
dc.date.accessioned: 2020-07-13T03:33:07Z
dc.date.available: 2020-07-13T03:33:07Z
dc.date.issued: 2020
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/dsp01xp68kk143
dc.description.abstract: In contrast to its unprecedented practical success across a wide range of fields, the theoretical understanding of the principles behind the success of deep learning has been a troubling and controversial subject. In this dissertation, we build a systematic framework to study the theoretical issues of neural networks, including their approximation and generalization properties as well as the issues associated with the optimization algorithms. We take inspiration from traditional numerical analysis. Since neural networks usually have high-dimensional input, the most important consideration in this study is the curse of dimensionality. First, we focus on the approximation properties of neural networks. For typical neural network models, we build approximation theories by identifying the appropriate function spaces, formed by all the functions that can be approximated by these models without the curse of dimensionality. Direct and inverse approximation theorems are proven, which imply that a function can be efficiently approximated by a neural network model if and only if it belongs to the corresponding function space. Second, we deal with the generalization issue. We design parameter norms for neural network models that bound the Rademacher complexity; this allows us to establish a posteriori estimates of the generalization error. Together with the results of the first part, we derive a priori estimates whose constants depend on the norm of the target function rather than on the trained model. Third, we study the optimization algorithms. Deep learning often operates in the over-parametrized regime, where the global minimizers of the empirical risk are far from unique. The first issue addressed in this part is therefore how different optimization algorithms select different global minimizers. Next, we analyze the training dynamics of an important linear model, the random feature model. We demonstrate that even though the generalization error at the true global minimizer may be very large, the deterioration happens very slowly during training, leaving a long period during which the generalization error remains small. Finally, we study the gradient descent dynamics of two-layer neural networks. We prove global and local convergence results by taking a continuous viewpoint. (Illustrative sketches of these kinds of statements and of the random feature dynamics follow the metadata record.)
dc.language.iso: en
dc.publisher: Princeton, NJ : Princeton University
dc.relation.isformatof: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: http://catalog.princeton.edu
dc.subject: Function space
dc.subject: Machine learning
dc.subject: Neural network
dc.subject.classification: Applied mathematics
dc.title: Mathematical Theory of Neural Network Models for Machine Learning
dc.type: Academic dissertations (Ph.D.)
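
The abstract above is stated at a high level. The two sketches below are editorial illustrations of the kind of statements and dynamics it refers to; they are not quoted from the dissertation, and the exact models, function spaces, norms, and constants treated there may differ.

First, a rough template for the direct approximation theorem and the a priori generalization estimate in the two-layer (Barron-space) setting of related work by the author and collaborators. Here B denotes the Barron space with norm ||.||_B, F_m the class of two-layer networks with m neurons, rho the input distribution on R^d, n the sample size, R the population risk, f* the target function, and f-hat a suitably regularized estimator; C is a constant and the "less-than-or-similar" symbol hides constants and logarithmic factors.

```latex
% Illustrative statement shapes only; not verbatim from the dissertation.

% Direct approximation theorem: a dimension-free rate in the number of neurons m.
\[
  \inf_{f_m \in \mathcal{F}_m} \; \| f - f_m \|_{L^2(\rho)}^2
  \;\le\; \frac{C\,\| f \|_{\mathcal{B}}^2}{m},
  \qquad f \in \mathcal{B}.
\]

% A priori estimate: the right-hand side depends on the norm of the target
% function f^*, not on the trained model.
\[
  R(\hat f) - R(f^*)
  \;\lesssim\; \frac{\| f^* \|_{\mathcal{B}}^2}{m}
  \;+\; \| f^* \|_{\mathcal{B}} \sqrt{\frac{\log d}{n}} .
\]
```

Second, a minimal numerical sketch (a toy setup of my own, not the dissertation's experiments) of the random feature phenomenon described in the third part of the abstract. With as many random features as noisy samples, the global minimizer of the empirical risk interpolates the noise and can generalize poorly, yet gradient descent approaches it only through the small eigendirections of the feature Gram matrix, so the test error typically stays near its early-stopping level for a long stretch of training before slowly deteriorating. The sizes, the ReLU feature map, and the 1/sqrt(m) scaling are arbitrary choices for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: n noisy samples of a 1-D target, fitted with m random ReLU features.
d, n, m, noise = 1, 40, 40, 0.1

def target(x):
    return np.sin(2.0 * np.pi * x).ravel()

X_train = rng.uniform(-1.0, 1.0, (n, d))
y_train = target(X_train) + noise * rng.standard_normal(n)
X_test = rng.uniform(-1.0, 1.0, (2000, d))
y_test = target(X_test)

# Random feature model: the inner weights (W, b) are sampled once and frozen;
# only the outer coefficients a are trained.
W = rng.standard_normal((m, d))
b = rng.standard_normal(m)

def phi(X):
    # ReLU random features, scaled by 1/sqrt(m).
    return np.maximum(X @ W.T + b, 0.0) / np.sqrt(m)

Phi_train, Phi_test = phi(X_train), phi(X_test)

# Full-batch gradient descent on the least-squares loss, with train/test errors
# reported at log-spaced checkpoints.
a = np.zeros(m)
lr = 1.0
checkpoints = {0, 10, 100, 1_000, 10_000, 100_000, 1_000_000}
for step in range(1_000_001):
    residual = Phi_train @ a - y_train
    if step in checkpoints:
        train_mse = np.mean(residual ** 2)
        test_mse = np.mean((Phi_test @ a - y_test) ** 2)
        print(f"step {step:>9d}  train {train_mse:.4f}  test {test_mse:.4f}")
    a -= lr * (Phi_train.T @ residual) / n
```

The intent is to make the long window of small test error, followed by a slow rise as the noise is gradually fit, visible in the printed checkpoints; the precise numbers depend on the random seed and the sizes chosen here.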
Appears in Collections: Mathematics

Files in This Item:
File: Ma_princeton_0181D_13374.pdf (1.76 MB, Adobe PDF)


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.