Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp01v405sc84s
Title: | Modeling Uncertainty in Stereo Vision for Precise and Robust State Estimation |
Authors: | Lee, Michael S. |
Advisors: | Martinelli, Luigi; Michaels, Nathan |
Department: | Mechanical and Aerospace Engineering |
Class Year: | 2016 |
Abstract: | This thesis develops a holistic framework for modeling uncertainty in stereo vision in pursuit of precise and robust state estimation through visual odometry. Precision is studied in the context of the fine-grained mathematical formulations that govern how a 3D point in the world is transformed into a 2D image pixel. Standard stochastic models assume that the positional error of all points on an image has equal variance, a property known as homoscedasticity. This assumption is valid for classical cameras that can be modeled as pinhole cameras following a perspective projection model, but it is insufficient for fisheye cameras, which suffer from significant radial lens distortion for points viewed far from the optical axis [21]. To update the positional uncertainty model for fisheye cameras, the equidistance projection model and an epipolar rectification model are first developed to allow a fisheye camera to be used interchangeably with classical cameras in visual odometry pipelines. Then, using the geometry of the epipolar rectification model, an improved stochastic model is proposed that introduces an additional variance factor for positional uncertainty that depends on the point's radial distance from the optical axis. Lastly, a MATLAB simulation environment is created to test the epipolar rectification and improved stochastic models by estimating the normal directions of mutually orthogonal planes using thirty photo captures from a stereo camera setup. Robustness, on the other hand, is studied in the context of factors that lie outside the 3D point-camera relationship, such as insufficient illumination, variable lighting, and motion blur, which manifest in the image space. In particular, this thesis aims to answer the following question: is it possible to detect when the quality of incoming information degrades to the point where a visual odometry algorithm is bound to fail? Visual saliency, defined as conspicuity in a visual field that arises from center-surround contrast, is taken to be the actionable information correlated with visual odometry performance. Three studies are designed to help answer the overarching research question. First, the ability of saliency to predict visual odometry performance is verified using three characteristic instances of failure. Second, various metrics concerning saliency, luminance, and motion blur are engineered to capture common failure modalities; their ability to distinguish among the modalities is tested by obtaining and comparing a representative visual metric profile for each modality. And lastly, random forests are trained on the visual metrics of historical visual odometry performance to detect and identify occurrences of failure modalities. (Illustrative sketches of the projection/variance model and the failure-detection step appear after the item metadata below.) |
Extent: | 59 pages |
URI: | http://arks.princeton.edu/ark:/88435/dsp01v405sc84s |
Type of Material: | Princeton University Senior Theses |
Language: | en_US |
Appears in Collections: | Mechanical and Aerospace Engineering, 1924-2019 |
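The abstract contrasts the pinhole perspective projection with the fisheye equidistance projection and proposes a heteroscedastic pixel-uncertainty model whose variance grows with radial distance from the optical axis. Below is a minimal Python sketch of that idea, not the thesis's MATLAB code: the focal length, the scaling constant `k`, and the linear form of the variance term are illustrative assumptions.

```python
# Sketch: perspective vs. equidistance projection, plus a heteroscedastic
# pixel-variance model that depends on radial distance from the optical axis.
import numpy as np

def project_perspective(point_cam, f):
    """Pinhole model: radial image distance r = f * tan(theta)."""
    X, Y, Z = point_cam
    theta = np.arctan2(np.hypot(X, Y), Z)   # angle from the optical axis
    phi = np.arctan2(Y, X)                  # azimuth around the axis
    r = f * np.tan(theta)
    return np.array([r * np.cos(phi), r * np.sin(phi)])

def project_equidistance(point_cam, f):
    """Fisheye equidistance model: r = f * theta."""
    X, Y, Z = point_cam
    theta = np.arctan2(np.hypot(X, Y), Z)
    phi = np.arctan2(Y, X)
    r = f * theta
    return np.array([r * np.cos(phi), r * np.sin(phi)])

def pixel_variance(uv, sigma0=0.5, k=0.1):
    """Heteroscedastic model: base variance sigma0^2 inflated by a factor
    that grows with radial distance (linear growth is an assumption)."""
    r = np.linalg.norm(uv)
    return sigma0**2 * (1.0 + k * r)

pt = np.array([1.0, 0.5, 2.0])              # a 3D point in camera coordinates
uv = project_equidistance(pt, f=300.0)
print(uv, pixel_variance(uv))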
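The third robustness study trains random forests on per-frame visual metrics labeled with historical visual odometry outcomes. A hedged sketch using scikit-learn follows; the feature set (saliency, luminance, and blur scores), the modality labels, and the synthetic data are assumptions standing in for the thesis's engineered metrics, not a reproduction of them.

```python
# Sketch: classify visual odometry failure modalities from visual metrics.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Columns: [mean saliency, mean luminance, blur score]  (assumed features)
X = rng.uniform(size=(500, 3))
# Labels: 0 = nominal, 1 = low light, 2 = motion blur   (assumed modalities)
y = rng.integers(0, 3, size=500)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

frame_metrics = np.array([[0.2, 0.1, 0.8]])  # metrics for one incoming frame
print(clf.predict(frame_metrics))            # predicted failure modality
```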
Files in This Item:
File | Size | Format | Access
---|---|---|---
ORIGINAL | 1.59 MB | Adobe PDF | Request a copy