Please use this identifier to cite or link to this item:
http://arks.princeton.edu/ark:/88435/dsp01m613n1193
Title: Optimal Learning for Nonlinear Parametric Belief Models
Authors: He, Xinyu
Advisors: Powell, Warren B.
Contributors: Electrical Engineering Department
Keywords: knowledge gradient; nonlinear parametric model; optimal learning
Subjects: Electrical engineering; Operations research
Issue Date: 2017
Publisher: Princeton, NJ : Princeton University
Abstract: Many real-world optimization problems require making measurements to determine which choice works best. Such measurements can be noisy and expensive, as might arise in simulations or laboratory experiments. Optimal learning addresses the problem of how to collect information efficiently. In particular, a class of Bayesian policies, known as the Knowledge Gradient (KG), has drawn extensive attention. KG is easy to implement for low-dimensional problems with simple belief models such as look-up tables or linear models, but becomes computationally intractable with nonlinear parametric belief models, which arise frequently in multidimensional problems. In this thesis, we study the optimal learning problem with nonlinear parametric belief models. We assume that the function to optimize, which might represent a performance, utility, or output measure, can be described globally or locally by a parametric model with unknown parameters. Our goal is to identify the optimal choice (where a choice is also called an experimental design or an alternative) after running a limited number of sequential measurements, which are noisy and expensive. We first focus on problems with a global parametric form. We use a sampled set of candidate parameters to overcome the computational intractability of KG, and introduce a resampling process that updates the sampled set adaptively. We also develop an efficient implementation of the KG policy for continuous alternatives. We prove that our algorithm, which works for multidimensional and continuous alternative and parameter spaces, asymptotically learns both the correct parameter and the optimal alternative. Next we study problems in which a globally accurate model is unavailable, but there exist parametric models that describe the function well in local regions. We propose a method that constructs global approximations of the function and applies KG to quickly identify the optimal alternative. Experiments on both synthetic and real-world problems demonstrate the strong performance of our algorithm.
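The sampled-belief idea in the abstract lends itself to a compact illustration. The following is a minimal Python sketch, not the thesis's implementation: it estimates the knowledge-gradient value of each alternative when the belief over the unknown parameter is held as a sampled set of candidate vectors with posterior weights. The model `f`, the Gaussian noise assumption, and all names (`thetas`, `probs`, `noise_std`, `kg_sampled_beliefs`) are illustrative assumptions, and the expectation over hypothetical observations is approximated by plain Monte Carlo rather than the adaptive resampling scheme developed in the thesis.

```python
import numpy as np

def kg_sampled_beliefs(f, thetas, probs, alternatives, noise_std,
                       n_mc=200, rng=None):
    """Monte Carlo estimate of the knowledge-gradient value of each
    alternative, with the belief over the unknown parameter represented
    by sampled candidates `thetas` and posterior weights `probs`."""
    rng = np.random.default_rng() if rng is None else rng
    K, M = len(thetas), len(alternatives)

    # Predicted value of each alternative under each sampled parameter.
    preds = np.array([[f(x, th) for th in thetas]
                      for x in alternatives])          # shape (M, K)
    best_now = (preds @ probs).max()  # value of the current best alternative

    kg = np.zeros(M)
    for i in range(M):
        for _ in range(n_mc):
            # Simulate one noisy observation at alternative i, drawn from
            # the current belief: sample a "true" theta, then add noise.
            k = rng.choice(K, p=probs)
            y = preds[i, k] + noise_std * rng.standard_normal()

            # Bayesian reweighting of the sampled parameters given y.
            lik = np.exp(-0.5 * ((y - preds[i]) / noise_std) ** 2)
            post = probs * lik
            post /= post.sum()

            # Gain in the value of the best alternative after the update.
            kg[i] += (preds @ post).max() - best_now
    return kg / n_mc

# Example with a hypothetical quadratic belief model f(x; a, b) = -a(x - b)^2.
f = lambda x, th: -th[0] * (x - th[1]) ** 2
thetas = np.array([[1.0, 0.2], [0.5, 0.8], [2.0, 0.5]])
probs = np.full(3, 1.0 / 3)
alternatives = np.linspace(0.0, 1.0, 11)
kg = kg_sampled_beliefs(f, thetas, probs, alternatives, noise_std=0.1)
x_next = alternatives[np.argmax(kg)]  # measure where the KG value is largest
```

In the thesis's setting the sampled set itself is updated adaptively as the posterior concentrates; the fixed candidate set above is only the simplest version of that idea.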
URI: http://arks.princeton.edu/ark:/88435/dsp01m613n1193
Alternate format: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: catalog.princeton.edu
Type of Material: Academic dissertations (Ph.D.)
Language: en
Appears in Collections: Electrical Engineering
Files in This Item:
File | Description | Size | Format
---|---|---|---
He_princeton_0181D_12080.pdf | | 5.96 MB | Adobe PDF