Please use this identifier to cite or link to this item:
http://arks.princeton.edu/ark:/88435/dsp01pv63g290r
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Wentzlaff, David | - |
dc.contributor.author | Fu, Yaosheng | - |
dc.contributor.other | Electrical Engineering Department | - |
dc.date.accessioned | 2017-12-12T19:14:50Z | - |
dc.date.available | 2017-12-12T19:14:50Z | - |
dc.date.issued | 2017 | - |
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/dsp01pv63g290r | - |
dc.description.abstract | Modern CPUs, GPUs, and data centers are being built with more and more cores, and many popular workloads will require even more hardware parallelism in the future. Shared memory is a popular parallel programming model with many advantages, but it has historically been difficult to scale to a large number of cores/nodes. This thesis investigates hardware and software techniques that enable shared memory systems to scale. Specifically, this work focuses on two key challenges of large-scale shared memory systems: scalability and fault tolerance. The primary scalability challenge is the need to maintain cache coherence across all cores/nodes, which is difficult at scale. The fault-tolerance challenge arises mainly in distributed shared memory (DSM) systems because they are usually tightly integrated and thus do not provide good fault isolation between nodes. To address these challenges, this thesis first develops PriME, a parallel and distributed simulator that supports both multi-threaded and multi-programmed workloads, to simulate shared memory systems at scale. For scalability, this thesis introduces Coherence Domain Restriction (CDR), a cache coherence framework that sidesteps traditional scalability challenges and enables systems to scale to thousands of cores within a manycore chip or millions of cores across an entire data center. The complete CDR framework has been implemented on the 25-core Princeton Piton processor. For fault tolerance, this thesis develops both a software-centric solution, resilient memory operations (REMO), and a hardware-centric solution, a fault-tolerant cache coherence framework (FTCC). REMO is a set of load and store instructions that can return faults that programmers select to handle; it provides fault isolation in DSM systems, enabling them to scale without sacrificing resilience. FTCC, in turn, extends DSM systems with native fault tolerance in hardware without hurting their performance advantages. In sum, this thesis demonstrates that shared memory systems can achieve scalability and fault tolerance comparable to current cluster-based designs while maintaining benefits such as ease of programming and efficient memory accesses. | - |
dc.language.iso | en | - |
dc.publisher | Princeton, NJ : Princeton University | - |
dc.relation.isformatof | The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: <a href="http://catalog.princeton.edu">catalog.princeton.edu</a> | - |
dc.subject | Cache Coherence | - |
dc.subject | Distributed Systems | - |
dc.subject | Fault Tolerance | - |
dc.subject | Parallel Simulator | - |
dc.subject | Shared Memory | - |
dc.subject.classification | Computer engineering | - |
dc.subject.classification | Electrical engineering | - |
dc.subject.classification | Computer science | - |
dc.title | Architectural Support for Large-scale Shared Memory Systems | - |
dc.type | Academic dissertations (Ph.D.) | - |
pu.projectgrantnumber | 690-2143 | - |
Appears in Collections: | Electrical Engineering |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
Fu_princeton_0181D_12333.pdf | | 7.25 MB | Adobe PDF | View/Download |
Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.