Dissertation details
3D sensing and mapping using mobile color and depth sensors
Pahwa, Ramanpreet Singh
Keywords: Depth cameras; Calibration; Object proposals; Image stitching; Cylindrical image
Others: https://www.ideals.illinois.edu/bitstream/handle/2142/98125/PAHWA-DISSERTATION-2017.pdf?sequence=1&isAllowed=y
United States | English
Source: The Illinois Digital Environment for Access to Learning and Scholarship
【 Abstract 】
An important recent development in the field of visual information acquisition is the emergence of low-cost depth sensors that measure the scalar distance between the camera and the objects in the surrounding scene. These sensors project infrared rays, invisible to humans, to measure the scene distance at each pixel. They have already had a significant impact on fields such as computer vision, gaming, augmented reality, and robotic vision. However, like every new technology, depth cameras suffer from severe limitations such as low resolution, significant noise, lens distortion, and an inability to work in outdoor environments. Depth cameras need to be calibrated accurately before they can be used alongside color cameras for tasks such as 3D reconstruction, action recognition, scene sensing, and augmented virtual reality. This thesis investigates novel methods to measure and correct for these distortions and to use the denoised measurements in various vision-related applications. In particular, we tackle the following problems.

First, we propose a novel algorithm that takes a few depth images and uses them to simultaneously denoise and calibrate time-of-flight depth cameras. Our formulation is based on two key elements. We first use depth planarization in 3D to denoise the depth at each corner pixel. Thereafter, we use these improved depth measurements along with the corner pixel information to estimate the calibration parameters using a non-linear estimation algorithm. We demonstrate that our framework estimates the intrinsic and extrinsic calibration parameters more accurately, using fewer images and corners than are needed for traditional camera calibration. We evaluate our approach both on a synthetic dataset where ground-truth information is available and on real data taken from a photon mixing device (PMD) camera. In both cases, we demonstrate that our proposed framework outperforms traditional calibration techniques without a significant increase in computational complexity.

Second, we use the depth information provided by such cameras, along with the color information, for 3D object proposals in a given scene. We take a generic 2D object proposal technique as input to our system and perform depth-based filtering, exploiting the scene geometry to create a heatmap of each frame. We further use these heatmaps to remove any supporting planes present in the scene. Thereafter, we fuse the heatmaps of all frames in 3D using the camera pose to build a 3D point cloud of the scene and assign each point a ranking based on its importance in the scene. We perform density-based clustering on the top-ranked points to compute precise 3D bounding boxes that have a high probability of containing an object of interest.

Third, we integrate depth sensors and the external geometry of the scene to robustly stitch images captured in a cylindrical tunnel where the camera moves forward in a spiral fashion. We use structure-from-motion (SfM) to estimate the camera pose between adjacent frames, exploit the scene geometry to identify outliers among matching points, and apply bundle adjustment (BA) to improve the camera pose. We use depth sensors attached to the color camera to estimate the camera's translation.
Thereafter, we create an immersive 3D display in the Unity 3D rendering engine to show the stitched scenes in a cylindrical projection, through which the user can fly using keyboard and mouse controls. In the future, we intend to improve bundle adjustment for automatic stitching of tunnel-like scenes by exploiting the known geometry of the scene, making it more robust to outliers.
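As a rough illustration of the depth planarization described in the first contribution, the sketch below fits a least-squares plane to the back-projected 3D corner points of a planar calibration target and snaps each noisy point onto that plane. The pinhole model, function names, and parameter values are illustrative assumptions, not the thesis' implementation.

```python
# Minimal sketch (assumed pinhole model; not the thesis' code): denoise the depth
# measured at checkerboard corners by projecting the back-projected 3D points
# onto their best-fit plane.
import numpy as np

def backproject(depth, pixels, fx, fy, cx, cy):
    """Back-project pixel coordinates with measured depth into camera-frame 3D points."""
    u, v = pixels[:, 0], pixels[:, 1]
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=1)

def planarize(points):
    """Fit a least-squares plane to noisy 3D points and project each point onto it."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)   # smallest singular vector = plane normal
    normal = vt[-1]
    offsets = (points - centroid) @ normal        # signed out-of-plane error per point
    return points - np.outer(offsets, normal)     # denoised points lying exactly on the plane

# Toy usage: 40 corners with noisy time-of-flight depths (metres) and assumed intrinsics.
corners = np.random.rand(40, 2) * np.array([640.0, 480.0])
noisy_z = 1.5 + 0.01 * np.random.randn(40)
denoised = planarize(backproject(noisy_z, corners, fx=525.0, fy=525.0, cx=319.5, cy=239.5))
```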
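Similarly, the final step of the second contribution, density-based clustering of the top-ranked fused points into 3D bounding boxes, might look roughly like the following; the use of DBSCAN and the specific parameter values are assumptions chosen only for illustration.

```python
# Minimal sketch (assumed parameters; not the thesis' code): cluster the
# highest-ranked points of the fused 3D point cloud and return one
# axis-aligned bounding box per dense cluster.
import numpy as np
from sklearn.cluster import DBSCAN

def boxes_from_ranked_points(points, scores, keep_ratio=0.2, eps=0.05, min_samples=30):
    """points: (N, 3) fused cloud; scores: (N,) per-point importance ranking."""
    order = np.argsort(scores)[::-1]                      # highest-scoring points first
    top = points[order[: int(len(points) * keep_ratio)]]  # keep only the top-ranked fraction
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(top)
    boxes = []
    for label in set(labels) - {-1}:                      # label -1 marks DBSCAN noise
        cluster = top[labels == label]
        boxes.append((cluster.min(axis=0), cluster.max(axis=0)))
    return boxes                                          # list of (min_corner, max_corner) pairs
```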
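Finally, the cylindrical projection underlying the third contribution and the "cylindrical image" keyword can be sketched as a simple forward warp; the focal length and image size below are assumptions, and the thesis' Unity-based fly-through viewer is not reproduced here.

```python
# Minimal sketch (assumed projection model; not the thesis' renderer): for each
# pixel of a cylindrical output image, compute the source pixel it samples in
# the original perspective image.
import numpy as np

def cylindrical_source_coords(width, height, f):
    cx, cy = width / 2.0, height / 2.0
    xs, ys = np.meshgrid(np.arange(width), np.arange(height))
    theta = (xs - cx) / f                  # azimuth of each output column on the cylinder
    h = (ys - cy) / f                      # normalized height on the cylinder surface
    x_src = f * np.tan(theta) + cx         # unproject the cylinder point back onto the image plane
    y_src = f * h / np.cos(theta) + cy
    return x_src, y_src                    # suitable for bilinear resampling, e.g. with cv2.remap

# Example: coordinate maps for a 640x480 image with an assumed focal length of 500 pixels.
map_x, map_y = cylindrical_source_coords(640, 480, f=500.0)
```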
【 Preview 】
Attachments
File | Size | Format | View
3D sensing and mapping using mobile color and depth sensors | 100446 KB | PDF | download
Document metrics
Downloads: 3    Views: 10