Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp016q182k292
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | Walker, David P | en_US
dc.contributor.author | Bell, Christian James | en_US
dc.contributor.other | Computer Science Department | en_US
dc.date.accessioned | 2014-06-05T19:45:53Z | -
dc.date.available | 2014-06-05T19:45:53Z | -
dc.date.issued | 2014 | en_US
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/dsp016q182k292 | -
dc.description.abstract | The microprocessor industry has embraced multicore architectures as the new dominant design paradigm. Harnessing the full power of such computers requires writing multithreaded programs, but regardless of whether one is writing a program from scratch or porting an existing single-threaded program, concurrency is hard to implement correctly and often reduces the legibility and maintainability of the source code. Single-threaded programs are easier to write, understand, and verify. Parallelizing compilers offer one solution by automatically transforming sequential programs into parallel programs. Assisting the programmer with challenging tasks like this (or other optimizations), however, makes compilers highly complex. This leads to bugs that add unexpected behaviors to compiled programs in ways that are very difficult to test. Formal compiler verification adds a rigorous mathematical proof of correctness to a compiler, which provides high assurance that successfully compiled programs preserve the behaviors of the source program and that no bugs are introduced. However, no parallelizing compiler has been formally verified. We lay the groundwork for verified parallelizing compilers by developing a general theory for proving the soundness of parallelizing transformations. Using this theory, we prove the soundness of a framework of small, generic transformations that compose together to build optimizations that are correct by construction. We demonstrate it by implementing several classic and cutting-edge loop-parallelizing optimizations: DOALL, DOACROSS, and Decoupled Software Pipelining. Two of our main contributions are the development and proof of a general parallelizing transformation and a transformation that coinductively folds a transformation over a potentially nonterminating loop, which we compose together to parallelize loops. Our third contribution is an exploration of the theory behind the correctness of parallelization, where we consider the preservation of nondeterminism and develop bisimulation-based proof techniques. Our proofs have been mechanically checked by the Coq Proof Assistant. | en_US
dc.language.iso | en | en_US
dc.publisher | Princeton, NJ : Princeton University | en_US
dc.relation.isformatof | The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: http://catalog.princeton.edu | en_US
dc.subject | optimizations | en_US
dc.subject | parallelization | en_US
dc.subject | programming languages | en_US
dc.subject | verified compilers | en_US
dc.subject.classification | Computer science | en_US
dc.title | A Proof Theory for Loop-Parallelizing Transformations | en_US
dc.type | Academic dissertations (Ph.D.) | en_US
pu.projectgrantnumber | 690-2143 | en_US
Appears in Collections: Computer Science
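The abstract names DOALL as one of the loop-parallelizing optimizations verified in the dissertation. As background, DOALL applies to loops whose iterations carry no cross-iteration dependences, so the iterations may run in any order or concurrently. The sketch below is only an illustration of that idea, not code from the dissertation; the function name `doall_square` is hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

def doall_square(xs):
    """DOALL illustration: each iteration reads only its own element
    and writes only its own result, so there are no loop-carried
    dependences and the iterations can execute in parallel."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        # pool.map preserves input order, so the result matches
        # the sequential loop regardless of execution interleaving.
        return list(pool.map(lambda x: x * x, xs))

print(doall_square([1, 2, 3, 4]))  # [1, 4, 9, 16]
```

A verified DOALL transformation must prove that this parallel schedule preserves the sequential loop's observable behavior, which is the kind of soundness argument the dissertation mechanizes in Coq.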

Files in This Item:
File | Description | Size | Format
Bell_princeton_0181D_10894.pdf |  | 1.32 MB | Adobe PDF


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.