Please use this identifier to cite or link to this item:
http://arks.princeton.edu/ark:/88435/dsp01jw827f27t
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Powell, Warren B | - |
dc.contributor.author | Cheng, Bolong | - |
dc.contributor.other | Electrical Engineering Department | - |
dc.date.accessioned | 2017-07-17T20:27:44Z | - |
dc.date.available | 2017-07-17T20:27:44Z | - |
dc.date.issued | 2017 | - |
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/dsp01jw827f27t | - |
dc.description.abstract | In this thesis, we present local and hierarchical approximation methods for two classes of stochastic optimization problems: optimal learning and Markov decision processes. For the optimal learning problem class, we introduce a locally linear model with radial basis functions for estimating the posterior mean of the unknown objective function. The method uses a compact representation of the function that avoids storing the entire measurement history, as is typically required by nonparametric methods. We derive a knowledge gradient policy with the locally parametric model, which maximizes the expected value of information. We show that the policy is asymptotically optimal, and experimental work suggests that the method reliably finds the optimal solution on a range of test functions. For the Markov decision process problem class, we are motivated by an application in which we want to co-optimize a battery for multiple revenue streams, in particular energy arbitrage and frequency regulation. The nature of this problem requires the battery to make charging and discharging decisions at different time scales while accounting for stochastic information such as load demand, electricity prices, and regulation signals. Computing the exact optimal policy becomes intractable because of the large state space and the large number of time steps. We propose two methods to circumvent this computational bottleneck. First, we propose a nested MDP model that structures the co-optimization problem into smaller sub-problems with reduced state spaces. This model lets us understand how the battery behaves down to the two-second dynamics of the frequency regulation market. Second, we introduce a low-rank value function approximation for backward dynamic programming. This method requires computing the exact value function only for a small subset of the state space and approximates the full value function via low-rank matrix completion. We test these methods on historical price data from the PJM Interconnect and show that they outperform the baseline approach used in industry. | - |
dc.language.iso | en | - |
dc.publisher | Princeton, NJ : Princeton University | - |
dc.relation.isformatof | The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: http://catalog.princeton.edu | - |
dc.subject | battery optimization | - |
dc.subject | dynamic programming | - |
dc.subject | local approximation | - |
dc.subject | optimal learning | - |
dc.subject | renewable energy | - |
dc.subject | stochastic optimization | - |
dc.subject.classification | Engineering | - |
dc.subject.classification | Energy | - |
dc.title | Local Approximation and Hierarchical Methods for Stochastic Optimization | - |
dc.type | Academic dissertations (Ph.D.) | - |
pu.projectgrantnumber | 690-2143 | - |
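The abstract above describes its two techniques only in prose. As an illustration (not the thesis's own code), the sketch below shows the knowledge-gradient value-of-information calculation in the simpler setting of independent normal beliefs over a finite set of alternatives; the thesis replaces this belief model with a locally linear radial basis function model, but the decision rule has the same shape: measure the alternative whose expected improvement of the posterior maximum is largest. All names and parameter values here are hypothetical.

```python
# Illustrative only: knowledge-gradient (value of information) scores for a
# finite set of alternatives under independent normal beliefs. Names are
# hypothetical; the thesis itself uses a locally linear RBF belief model.
import numpy as np
from scipy.stats import norm

def knowledge_gradient(mu, sigma2, noise_var):
    """Return the KG factor for each alternative (expected one-step improvement)."""
    # Std. dev. of the change in the posterior mean caused by one more measurement.
    sigma_tilde = sigma2 / np.sqrt(sigma2 + noise_var)
    kg = np.zeros_like(mu)
    for i in range(len(mu)):
        best_other = np.max(np.delete(mu, i))
        z = -abs(mu[i] - best_other) / max(sigma_tilde[i], 1e-12)
        kg[i] = sigma_tilde[i] * (z * norm.cdf(z) + norm.pdf(z))
    return kg

mu = np.array([1.0, 1.2, 0.8, 1.1])        # posterior means of the alternatives
sigma2 = np.array([0.5, 0.1, 0.9, 0.3])    # posterior variances
scores = knowledge_gradient(mu, sigma2, noise_var=0.2)
print("measure alternative", int(np.argmax(scores)))  # highest value of information
```

The low-rank value function approximation can likewise be pictured with a generic matrix-completion sketch: compute the value exactly on a small random subset of a two-dimensional state grid and fill in the remaining entries by iterative soft-thresholded SVD (SoftImpute-style completion). This is a simplified stand-in built on a synthetic low-rank value matrix, not the thesis's algorithm or data.

```python
# Illustrative only: complete a value function V(s1, s2) from a small set of
# exactly computed entries via soft-thresholded SVD. The "true" matrix below is
# a synthetic, low-rank stand-in for one stage of backward dynamic programming.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic value function on a 60 x 60 state grid (rank 2 by construction).
g1 = np.linspace(0, 1, 60)[:, None]   # e.g. battery energy level
g2 = np.linspace(0, 1, 60)[None, :]   # e.g. discretized price state
V_true = 10 * g1 @ g2 + 3 * np.sin(3 * np.pi * g1) @ np.cos(2 * np.pi * g2)

# "Compute exactly" only 15% of the entries.
mask = rng.random(V_true.shape) < 0.15
V_obs = np.where(mask, V_true, 0.0)

def soft_impute(V_obs, mask, shrink=1.0, iters=200):
    """Fill unobserved entries with a low-rank estimate (soft-thresholded SVD)."""
    X = V_obs.copy()
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s = np.maximum(s - shrink, 0.0)      # shrink singular values toward zero
        X_low = (U * s) @ Vt                 # low-rank reconstruction
        X = np.where(mask, V_obs, X_low)     # keep exact values where available
    return np.where(mask, V_obs, X_low)

V_hat = soft_impute(V_obs, mask)
err = np.linalg.norm(V_hat - V_true) / np.linalg.norm(V_true)
print(f"relative completion error: {err:.3f}")
```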
Appears in Collections: | Electrical Engineering |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
Cheng_princeton_0181D_12185.pdf | | 2.96 MB | Adobe PDF | View/Download |
Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.