Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp0144558h265
Full metadata record
DC Field | Value | Language
dc.contributor.advisor | Finkelstein, Adam | -
dc.contributor.advisor | Besler, Erin D | -
dc.contributor.author | Uberoy, Urvashi | -
dc.date.accessioned | 2020-08-12T14:45:22Z | -
dc.date.available | 2020-08-12T14:45:22Z | -
dc.date.created | 2020-05-03 | -
dc.date.issued | 2020-08-12 | -
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/dsp0144558h265 | -
dc.description.abstract | Prior urban studies initiatives have used observational methods to capture and model human movement in public spaces, resulting in static visualizations that highlight generalizable behaviors and trends. Motivated by the limitations of static representations in portraying dynamic movement, this thesis presents a proof-of-concept tool that uses a data-driven approach to learn and simulate human movement in urban public spaces. Human movement is learned with a reinforcement learning model proposed by Kitani et al. in “Activity Forecasting” (2012). This model looks at human interactions with the static features of a scene (buildings, cars, grass, etc.) to predict paths of movement. I retrain this model on videos of pedestrian-friendly scenes to obtain a set of feature weights that convey the influence of each static feature on human movement. With these feature weights, I feed sample images of public spaces into an Optimal Control (OC) model that forecasts a trajectory between a specified source and destination. The user can also paint additional static features onto the image to see how the predicted trajectories change. I use an adapted form of Dijkstra’s Shortest Path algorithm (DSP) to find the maximum-likelihood single-line path from source to destination, and I stitch motion-capture figures from CMU's MoCap dataset along the path to simulate this movement. My findings reveal that retraining the OC model on more pedestrian-friendly scenes improves the model’s sensitivity to static features like “grass” and “pavement” while decreasing its sensitivity to features like “car.” Further, an analysis of my path-finding algorithm shows that the overall cost of the paths produced by DSP is lower than that of straight-line paths, highlighting the plausibility of the simulated patterns of movement. This tool thus offers architects and urban planners an alternate, dynamic form of spatial representation, with the potential to deepen understanding of how people interact with spaces and to inform more people-sensitive built environments. (An illustrative sketch of the cost-map and path-search steps follows this record.) | en_US
dc.format.mimetype | application/pdf | -
dc.language.iso | en | en_US
dc.title | Learning and Simulating Human Movement in Public Spaces | en_US
dc.type | Princeton University Senior Theses | -
pu.date.classyear | 2020 | en_US
pu.department | Computer Science | en_US
pu.pdf.coverpage | SeniorThesisCoverPage | -
pu.contributor.authorid | 920068419 | -
pu.certificate | Architecture and Engineering Program | en_US
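
The abstract describes combining learned static-feature weights into a per-pixel cost and running an adapted Dijkstra's Shortest Path search to recover a maximum-likelihood path between a source and a destination. The following is a minimal sketch of that idea, not code from the thesis: the function names (cost_map, find_path), the linear weight-to-cost combination, and the 4-connected pixel grid are assumptions made for illustration.

```python
import heapq
import numpy as np

def cost_map(feature_maps, feature_weights):
    # Hypothetical combination step: feature_maps is an (F, H, W) stack of
    # binary masks for static features (e.g. grass, pavement, car), and
    # feature_weights is the (F,) vector learned by the retrained model.
    costs = np.tensordot(feature_weights, feature_maps, axes=1)  # (H, W)
    return costs - costs.min() + 1e-3  # shift so every step cost is positive

def find_path(costs, source, dest):
    # Dijkstra over the 4-connected pixel grid: returns the minimum-cost
    # (interpreted here as maximum-likelihood) path from source to dest.
    h, w = costs.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == dest:
            break
        if d > dist[r, c]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and d + costs[nr, nc] < dist[nr, nc]:
                dist[nr, nc] = d + costs[nr, nc]
                prev[(nr, nc)] = (r, c)
                heapq.heappush(heap, (dist[nr, nc], (nr, nc)))
    # Walk predecessors back from the destination to recover the path.
    path, node = [dest], dest
    while node != source:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

Under these assumptions, find_path(cost_map(masks, weights), (r0, c0), (r1, c1)) would return a list of pixel coordinates along which motion-capture figures could be stitched, and its summed cost could be compared against a straight-line path, mirroring the evaluation described in the abstract.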
Appears in Collections: Computer Science, 1988-2020

Files in This Item:
File | Description | Size | Format
UBEROY-URVASHI-THESIS.pdf | | 13.1 MB | Adobe PDF

