Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp016969z3194
Full metadata record
dc.contributor.advisor: Powell, Warren B.
dc.contributor.author: Jiang, Daniel Ruoling
dc.contributor.other: Operations Research and Financial Engineering Department
dc.date.accessioned: 2016-06-08T18:40:51Z
dc.date.available: 2016-06-08T18:40:51Z
dc.date.issued: 2016
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/dsp016969z3194
dc.description.abstract: In this thesis, we propose approximate dynamic programming (ADP) methods for solving risk-neutral and risk-averse sequential decision problems under uncertainty, focusing on models that are intractable under traditional techniques. Though "intractability" can take numerous forms, we examine two specific situations that are encountered by many practical problems: (1) when the size of the state space causes exact solution methods to be computationally prohibitive and (2) when models of the underlying stochastic processes are either unknown or too complex for expectations or risks to be evaluated exactly. For the risk-neutral case, we consider the structural property of monotonicity in the value function, which arises in any application where "more is better." We present a provably convergent algorithm that exploits the monotone structure of the problem in order to obtain near-optimal policies using a relatively small amount of computation (when compared to exact techniques). The second algorithm that we contribute is a method to obtain near-optimal policies for sequential problems specified under a dynamic, quantile-based risk measure. In addition, we address the issue of inefficient sampling in risk-averse situations and propose a companion procedure to direct samples to the "risky region." Both the risk-neutral and risk-averse ADP methods are inherently simulation-based, but it is important to note that they can be particularly useful in data-driven settings where distributions are not known. Finally, our proposed techniques are applied to risk-neutral and risk-averse versions of an energy bidding problem, where one bids into an hour-ahead market with the goal of "trading" physical energy, i.e., performing energy arbitrage. Such a study has important implications in understanding the valuation of energy storage.
dc.language.iso: en
dc.publisher: Princeton, NJ : Princeton University
dc.relation.isformatof: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: http://catalog.princeton.edu/
dc.subject: approximate dynamic programming
dc.subject: energy bidding
dc.subject: monotonicity
dc.subject: risk averse
dc.subject.classification: Operations research
dc.title: Risk-Neutral and Risk-Averse Approximate Dynamic Programming Methods
dc.type: Academic dissertations (Ph.D.)
pu.projectgrantnumber: 690-2143
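The abstract above refers to an ADP algorithm that exploits monotonicity of the value function. As a rough, hedged illustration only (not the thesis's actual operator), the Python sketch below shows a monotonicity-restoring projection on a one-dimensional, totally ordered state space after a single-state observation; the name monotone_projection, the 1-D setting, and the example values are assumptions made for this sketch.

    import numpy as np

    def monotone_projection(v_hat, s, z):
        """Replace the estimate at state s with observation z, then restore
        monotonicity (nondecreasing in the state index).

        v_hat : 1-D array of current value estimates, assumed nondecreasing
        s     : index of the observed state
        z     : new (e.g., smoothed) value estimate at state s
        """
        v = v_hat.copy()
        v[s] = z
        # States below s keep their estimates unless they now exceed z;
        # states above s keep theirs unless they now fall below z.
        v[:s] = np.minimum(v[:s], z)
        v[s + 1:] = np.maximum(v[s + 1:], z)
        return v

    # Example: an observation at state 2 that would otherwise violate monotonicity.
    v = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    print(monotone_projection(v, 2, 4.5))   # [1.  2.  4.5 4.5 5. ]

In the multidimensional settings the thesis considers, the same idea would be applied with respect to a partial order on states rather than a total order.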
Appears in Collections: Operations Research and Financial Engineering

Files in This Item:
File: Jiang_princeton_0181D_11711.pdf
Size: 2.9 MB
Format: Adobe PDF


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.