Title: DeepAlign: Learning Optical Flow for Image Alignment
Authors: Mitchell, Eric
Advisors: Seung, H. Sebastian
Department: Computer Science
Class Year: 2018
Abstract: In this work, we present DeepAlign, a new neural network architecture and training paradigm for learning optical flow for the application of image alignment. The primary novelty of our architecture is the replacement of the naive downsampling used in other spatial pyramid network architectures, such as SpyNet [23, 30], with a hierarchical encoding pathway that we propose generates more information-dense inputs for flow estimation. We demonstrate that DeepAlign outperforms similar optical flow architectures on our particular image alignment task: the alignment of large volumes of electron microscopy (EM) imagery. Large-scale alignment of this kind is a common preprocessing step in many biomedical research efforts, and a fast, accurate method for it would have broad applicability. We quantitatively and qualitatively compare the quality of our system with SpyNet, as well as with a standard non-deep-learning approach, called Alembic, which has been demonstrated to be effective at large-scale EM alignment. We show promising results for DeepAlign on the Pinky40 EM dataset, with superior average-case performance to both Alembic and SpyNet. However, we also note that the worst-case performance of DeepAlign, though superior to that of SpyNet, is not yet close enough to Alembic's to warrant replacing it. We conclude with several suggestions for how to close this performance gap.
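
To make the architectural contrast drawn in the abstract concrete, the following is a minimal PyTorch sketch, not taken from the thesis itself; the module names, channel counts, and shapes are illustrative assumptions. It contrasts a naive downsampling pyramid, as used in SpyNet, with a learned hierarchical encoding pathway of the kind the abstract describes.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Naive pyramid input (SpyNet-style): each level is a plain
    # downsampled copy of the image, so coarse levels lose detail.
    def naive_pyramid(img, levels=4):
        pyramid = [img]
        for _ in range(levels - 1):
            pyramid.append(F.avg_pool2d(pyramid[-1], kernel_size=2))
        return pyramid  # finest level first

    # Hierarchical encoding pathway (a sketch of the DeepAlign idea):
    # each level is a *learned* feature map rather than a blurred image,
    # so coarse levels can carry more information per pixel.
    class HierarchicalEncoder(nn.Module):
        def __init__(self, in_ch=1, feat_ch=16, levels=4):
            super().__init__()
            self.stages = nn.ModuleList()
            ch = in_ch
            for _ in range(levels):
                self.stages.append(nn.Sequential(
                    nn.Conv2d(ch, feat_ch, kernel_size=3, stride=2, padding=1),
                    nn.ReLU(inplace=True),
                ))
                ch = feat_ch

        def forward(self, img):
            feats, x = [], img
            for stage in self.stages:
                x = stage(x)        # halve resolution, learn features
                feats.append(x)
            return feats  # fine-to-coarse learned feature pyramid

    # Example: a single grayscale EM section (shape is an assumption)
    img = torch.randn(1, 1, 256, 256)
    feats = HierarchicalEncoder()(img)  # [1,16,128,128], [1,16,64,64], ...

In either case a flow estimator at each pyramid level would consume the corresponding inputs; the difference is whether those inputs are fixed downsampled images or learned encodings.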
URI: http://arks.princeton.edu/ark:/88435/dsp01vx021h833
Type of Material: Princeton University Senior Theses
Language: en
Appears in Collections: Computer Science, 1988-2020

Files in This Item:
File: MITCHELL-ERIC-THESIS.pdf
Size: 5.27 MB
Format: Adobe PDF
Access: Request a copy


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.