Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp013x816q374
Full metadata record
DC Field: Value [Language]
dc.contributor.advisor: Rusinkiewicz, Szymon
dc.contributor.author: Thuremella, Divya
dc.date.accessioned: 2018-08-20T15:21:54Z
dc.date.available: 2018-08-20T15:21:54Z
dc.date.created: 2018-05-07
dc.date.issued: 2018-08-20
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/dsp013x816q374
dc.description.abstract [en_US]:
This thesis examines a method of semantically segmenting 3D point cloud Lidar data by transforming the data into 2D and using a dilated convolutional neural network to segment the resulting 2D images. Lidar data is distinctive in that it is 3D data captured from a single viewpoint, so the points can be projected onto a sphere around that viewpoint and 'unrolled' into a 2D image. This idea is used to turn a set of 3D point cloud Lidar frames into a set of 2D images with very little information lost in the transformation. The dilated convolutional architecture specified in Yu 2015 [1] is then used to semantically segment these images, and variations on this architecture are explored to obtain a more efficient network with better accuracy. The network's performance was examined on five main classes: road, car, pedestrian, cyclist, and vegetation. Binary segmentation was performed on each individual class, followed by multiclass segmentation in which multiple class labels were predicted together. The data for these networks came from 252 labeled 3D point cloud frames of the KITTI dataset. The results show that transforming the 3D data into 2D is an effective way to perform semantic segmentation, and that this method is much more efficient than approaches that operate on the data in 3D. In addition, certain trends in the architecture parameters were found to generally lead to better results. In both binary and multiclass segmentation, the network performed relatively well on the road and car classes and fairly poorly on the pedestrian and cyclist classes. The binary segmentation network for the vegetation class, however, could barely detect anything at all, so the vegetation class was not included in the multiclass networks. Future work should develop a better neural network for classifying this type of data and train it on a larger, more varied dataset.
dc.format.mimetype: application/pdf
dc.language.iso: en [en_US]
dc.title: Semantic Segmentation of 3D Point Cloud Lidar Data for Autonomous Vehicles [en_US]
dc.type: Princeton University Senior Theses
pu.date.classyear: 2018 [en_US]
pu.department: Electrical Engineering [en_US]
pu.pdf.coverpage: SeniorThesisCoverPage
pu.contributor.authorid: 960963735
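
The spherical 'unrolling' described in the abstract is the standard range-image projection for single-viewpoint Lidar. The thesis's own code is not published in this record, so the following is a minimal NumPy sketch; the vertical field-of-view bounds (fov_up, fov_down) and the output resolution (h, w) are hypothetical parameters, not values taken from the thesis.

```python
import numpy as np

def spherical_projection(points, h=64, w=512, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 3) lidar point cloud onto an h x w range image.

    fov_up / fov_down are the sensor's vertical field-of-view bounds
    in degrees (hypothetical values, not taken from the thesis).
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)              # range to each point
    yaw = np.arctan2(y, x)                          # azimuth in [-pi, pi]
    pitch = np.arcsin(z / np.maximum(r, 1e-8))      # elevation angle

    fov_up_rad, fov_down_rad = np.radians(fov_up), np.radians(fov_down)
    fov = fov_up_rad - fov_down_rad

    # Map azimuth to columns and elevation to rows, i.e. 'unroll' the sphere.
    u = np.clip(np.floor(0.5 * (1.0 - yaw / np.pi) * w), 0, w - 1).astype(int)
    v = np.clip(np.floor((1.0 - (pitch - fov_down_rad) / fov) * h),
                0, h - 1).astype(int)

    image = np.zeros((h, w), dtype=np.float32)
    image[v, u] = r   # the last point falling into a pixel wins
    return image
```

Each pixel here stores only the range of the point that landed in it; a fuller pipeline might keep additional channels (intensity, height, or x/y/z coordinates) as network input.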
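The abstract cites the dilated convolutional architecture of Yu 2015 [1], whose key idea is a stack of 3x3 convolutions with exponentially increasing dilation, which grows the receptive field without pooling and so preserves per-pixel output resolution. The sketch below illustrates that dilation pattern in PyTorch; the channel width, depth, and class count are illustrative choices, not the thesis's exact architecture.

```python
import torch
import torch.nn as nn

class DilatedSegNet(nn.Module):
    """Stack of 3x3 convolutions with exponentially growing dilation.

    With padding equal to the dilation, every layer preserves the input
    resolution, so the output is a dense per-pixel score map. Channel
    width and depth are illustrative, not the thesis's values.
    """
    def __init__(self, in_ch=1, num_classes=2, width=32):
        super().__init__()
        layers, prev = [], in_ch
        for d in (1, 1, 2, 4, 8, 16, 1):
            layers += [nn.Conv2d(prev, width, 3, padding=d, dilation=d),
                       nn.ReLU(inplace=True)]
            prev = width
        layers.append(nn.Conv2d(width, num_classes, 1))  # 1x1 classifier
        self.net = nn.Sequential(*layers)

    def forward(self, x):                 # x: (B, in_ch, H, W)
        return self.net(x)                # (B, num_classes, H, W)

# Example: per-pixel class scores for one 64 x 512 range image.
scores = DilatedSegNet()(torch.randn(1, 1, 64, 512))
```

For the binary per-class experiments described in the abstract, num_classes would be 2 (class vs. background); for the multiclass networks it would equal the number of jointly predicted labels plus background.
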
Appears in Collections: Electrical Engineering, 1932-2020

Files in This Item:
File: THUREMELLA-DIVYA-THESIS.pdf
Size: 2.28 MB
Format: Adobe PDF


Items in DataSpace are protected by copyright, with all rights reserved, unless otherwise indicated.