Please use this identifier to cite or link to this item:
http://arks.princeton.edu/ark:/88435/dsp01dz010s764
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Wentzlaff, David | - |
dc.contributor.author | Zhou, Yanqi | - |
dc.contributor.other | Electrical Engineering Department | - |
dc.date.accessioned | 2018-06-12T17:43:12Z | - |
dc.date.available | 2018-06-12T17:43:12Z | - |
dc.date.issued | 2018 | - |
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/dsp01dz010s764 | - |
dc.description.abstract | Businesses and academics are increasingly turning to Infrastructure as a Service (IaaS) clouds such as Amazon’s EC2 to fulfill their computing and storage needs. Unfortunately, computational resources are provisioned and charged in a bundled fashion: customers can only choose an integer number of CPUs and GPUs, and a certain amount of memory and disk, to include in their Virtual Machine configurations. Current cloud customers cannot choose fine-grain resources such as functional units within a core, cache size, hardware accelerators, and memory bandwidth, even though their diverse applications and evaluation criteria call for different configurations of those resources. The lack of a highly configurable architecture and fine-grain resource provisioning leads to a less economically efficient market in which customers overpay for or underutilize their resources and cloud providers over-provision or forgo revenue opportunities. To provide better infrastructure for higher cloud economic efficiency, we designed, implemented, and evaluated a highly configurable architecture and resource provisioning mechanism for IaaS clouds. This work debundles hardware resources into sub-core and sub-accelerator units and connects the sea of fine-grain resources with several switched on-chip networks. A distributed hardware mechanism shapes memory transaction inter-arrival time into a predetermined distribution on a per-core/per-thread basis. This disentangling of resources enables renting out fine-grain resources (e.g., ALUs, last-level cache, sub-accelerator components, memory bandwidth) flexibly. As a result, cloud customers can choose variable inter-core and inter-accelerator parameters and compose arbitrary virtual core configurations according to their applications’ needs. 
To help cloud customers understand the tradeoff space and determine their virtual core configurations, a runtime system co-designed with the sub-core configurable architecture dynamically configures the hardware, optimizing the tunable parameters to meet QoS requirements while minimizing cost. Overall, this work encompasses various aspects of a cloud system, providing fine-grain and economically efficient solutions for IaaS clouds. | - |
dc.language.iso | en | - |
dc.publisher | Princeton, NJ : Princeton University | - |
dc.relation.isformatof | The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: <a href="http://catalog.princeton.edu">catalog.princeton.edu</a> | - |
dc.subject.classification | Computer engineering | - |
dc.title | Configurable Architecture and Resource Provisioning for Future Clouds | - |
dc.type | Academic dissertations (Ph.D.) | - |
pu.projectgrantnumber | 690-2143 | - |
Appears in Collections: | Electrical Engineering |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
Zhou_princeton_0181D_12616.pdf | | 8.4 MB | Adobe PDF | View/Download |
Items in DataSpace are protected by copyright, with all rights reserved, unless otherwise indicated.