Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01vm40xr68f
Title: Peering into Peer Assessment: An Investigation of the Reliability of Peer Assessment in MOOCs
Authors: Tsai, Paige
Advisors: Oppenheimer, Daniel
Contributors: Cooper, Joel
Department: Psychology
Class Year: 2013
Abstract: The present research investigates the reliability of peer assessment methods currently employed by massive open online course platforms (e.g., Coursera). Previous research on crowdsourcing (Snow et al., 2012) has examined the grading of tasks for which there are clear or objective answers (e.g., word sense disambiguation tasks). This study seeks to empirically verify online educators’ claim that findings from the crowdsourcing literature can be generalized to the grading of essays in an academic context. A sample of essays (N = 337) submitted for the Coursera class History of the World since 1300 was analyzed. Part 1 sought to replicate findings suggesting that the aggregate of multiple student grades approximates the judgment of a single expert grader. Part 2 sought to discover the primary drivers of peer and expert assessments, with the ultimate goal of determining whether the relationship observed in Part 1 was spurious. Our analyses revealed that while peer grades tend to recapitulate expert grades for this sample of essays, the correlation between peer and expert grades is spurious.
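
A minimal, hypothetical sketch (not drawn from the thesis) of the kind of aggregation-and-correlation check the abstract describes: average several peer grades per essay and correlate that average with a single expert grade. All names and grade values below are invented for illustration.

# Hypothetical illustration in Python; the grade values are made up and
# are not data from the thesis.
import numpy as np

# Each row holds the grades assigned by several peers to one essay.
peer_grades = np.array([
    [7, 8, 6, 7],
    [5, 6, 5, 4],
    [9, 8, 9, 9],
    [6, 7, 7, 6],
    [4, 5, 3, 4],
])

# One expert grade for each of the same essays.
expert_grades = np.array([7, 5, 9, 7, 4])

# Aggregate the peer judgments (a simple mean here; the thesis may have
# used a different aggregation rule).
peer_aggregate = peer_grades.mean(axis=1)

# Pearson correlation between the peer aggregate and the expert grades.
r = np.corrcoef(peer_aggregate, expert_grades)[0, 1]
print(f"Peer aggregate vs. expert grade correlation: r = {r:.2f}")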
Extent: 68 pages
URI: http://arks.princeton.edu/ark:/88435/dsp01vm40xr68f
Access Restrictions: Walk-in Access. This thesis can only be viewed on computer terminals at the Mudd Manuscript Library.
Type of Material: Princeton University Senior Theses
Language: en_US
Appears in Collections: Psychology, 1930-2020

Files in This Item:
File: Tsai_paige_thesis.pdf
Size: 1.07 MB
Format: Adobe PDF