Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp010k225d672
Full metadata record
dc.contributor.advisor: Martonosi, Margaret R.
dc.contributor.author: White, Joseph
dc.date.accessioned: 2017-07-20T13:13:17Z
dc.date.available: 2017-07-20T13:13:17Z
dc.date.created: 2017-06-02
dc.date.issued: 2017-06-02
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/dsp010k225d672
dc.description.abstract [en_US]: Course evaluations at Princeton lack numerical difficulty measures, but the textual evaluations are rich with student commentary about course difficulty. This suggests an opportunity to generate a numerical difficulty score from the textual evaluations. In this project, we collect a data set via scraping and test several variations of a “bag of words” approach, with the goal of determining whether such an approach holds promise as a difficulty measure. The variations are evaluated by testing their correlation with the number of pages of weekly reading assigned in a course. Surprisingly, the correlation is negative, perhaps because STEM courses are perceived as difficult yet tend to assign lighter reading loads. Based on this result, a research agenda is suggested for similar tools that can complement human academic advisors in helping students choose better schedules.
dc.language.iso [en_US]: en_US
dc.title [en_US]: Experimental Measures of Difficulty for Princeton Courses
dc.type: Princeton University Senior Theses
pu.date.classyear [en_US]: 2017
pu.department [en_US]: Computer Science
pu.pdf.coverpage: SeniorThesisCoverPage
pu.contributor.authorid: 960889439
pu.contributor.advisorid: 010053117
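
The abstract above describes scoring course difficulty from evaluation text with a bag-of-words approach and validating the score by correlating it against weekly assigned reading. A minimal sketch of that pipeline, assuming a hand-picked difficulty lexicon, simple token-fraction scoring, and hypothetical per-course data (the thesis's actual word lists, weighting, and scraped data set are not reproduced here):

```python
# Sketch only: the lexicon, scoring rule, and sample data below are
# illustrative assumptions, not the thesis's actual method or data.
import re
from collections import Counter

# Hypothetical lexicon of "difficulty" words; the thesis tests several
# bag-of-words variations rather than this single fixed set.
DIFFICULTY_WORDS = {"hard", "difficult", "challenging", "brutal", "demanding"}

def difficulty_score(comments):
    """Fraction of tokens across a course's comments found in the lexicon."""
    tokens = []
    for comment in comments:
        tokens.extend(re.findall(r"[a-z']+", comment.lower()))
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    return sum(counts[w] for w in DIFFICULTY_WORDS) / len(tokens)

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical scraped records: (evaluation comments, weekly reading pages).
courses = [
    (["Brutal problem sets.", "Hard but rewarding."], 20),
    (["Light reading, relaxed pace."], 150),
]
scores = [difficulty_score(comments) for comments, _ in courses]
pages = [p for _, p in courses]
print(pearson_r(scores, pages))  # the abstract reports a negative correlation
```

With only two sample courses the correlation is trivially plus or minus one; the point of the sketch is the shape of the evaluation, in which each lexicon variant is scored across many courses and its correlation with reading load is compared.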
Appears in Collections: Computer Science, 1988-2020

Files in This Item:
File: written_final_report(1).pdf
Size: 413.42 kB
Format: Adobe PDF (request a copy)


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.