Journal article details
PATTERN RECOGNITION, Vol. 31
A moment-preserving approach for depth from defocus
Article
Tsai, DM ; Lin, CT
Keywords: depth from defocus; range sensing; moment-preserving; neural networks
DOI: 10.1016/S0031-3203(97)00068-X
Source: Elsevier
【 Abstract 】

For range sensing using depth-from-defocus methods, the distance D of a point object from the lens can be evaluated by the concise depth formula D = P/(Q - d_b), where P and Q are constants for a given camera setting and d_b is the diameter of the blur circle for the point object on the image detector plane. The amount of defocus d_b is traditionally estimated from the spatial parameter of a Gaussian point spread function using a complex iterative solution. In this paper, we use a straightforward and computationally fast method to estimate the amount of defocus from a single camera image. The observed gray-level image is initially converted into a gradient image using the Sobel edge operator. For the edge point of interest, the proportion of the blurred edge region p_e in a small neighborhood window is then calculated using the moment-preserving technique. The value of p_e increases as the amount of defocus increases and, therefore, is used as the description of degradation of the point-spread function. In addition to the use of the geometric depth formula for depth estimation, artificial neural networks are also proposed in this study to compensate for the estimation errors from the depth formula. Experiments have shown promising results: the RMS depth errors are within 5% for the depth formula and within 2% for the neural networks. (C) 1998 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.
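The abstract describes a pipeline (Sobel gradient, moment-preserving estimate of p_e, depth formula D = P/(Q - d_b)) that can be illustrated with a short sketch. The Python code below is only a minimal illustration under stated assumptions: the moment-preserving bilevel split of Tsai (1985) is applied to gradient magnitudes in a small window to obtain p_e, and a hypothetical linear calibration factor alpha maps p_e to the blur diameter d_b, standing in for the experimental calibration (or neural-network correction) used in the paper. Function names, the window size, and alpha are illustrative, not taken from the paper.

    import numpy as np
    from scipy.ndimage import sobel

    def moment_preserving_fraction(window):
        # Bilevel moment-preserving split (Tsai, 1985) of the values in `window`.
        # Returns the fraction assigned to the brighter (high-gradient) class,
        # used here as the blurred-edge proportion p_e.
        z = window.astype(float).ravel()
        m1, m2, m3 = z.mean(), (z ** 2).mean(), (z ** 3).mean()
        cd = m2 - m1 ** 2
        if cd <= 1e-12:                      # flat window: no edge class
            return 0.0
        c0 = (m1 * m3 - m2 ** 2) / cd
        c1 = (m1 * m2 - m3) / cd
        disc = np.sqrt(max(c1 ** 2 - 4.0 * c0, 0.0))
        z0, z1 = 0.5 * (-c1 - disc), 0.5 * (-c1 + disc)
        p0 = (z1 - m1) / (z1 - z0)           # fraction of the darker class
        return 1.0 - p0                      # p_e

    def depth_at_edge_point(image, row, col, P, Q, alpha, win=7):
        # Sobel gradient magnitude of the gray-level image.
        img = image.astype(float)
        grad = np.hypot(sobel(img, axis=1), sobel(img, axis=0))
        # Proportion of the blurred edge region in a small neighborhood window.
        h = win // 2
        p_e = moment_preserving_fraction(grad[row - h:row + h + 1, col - h:col + h + 1])
        # Assumed linear calibration from p_e to the blur-circle diameter d_b.
        d_b = alpha * p_e
        # Concise depth formula D = P / (Q - d_b).
        return P / (Q - d_b)

In practice P, Q and the p_e-to-d_b mapping would be obtained from a camera calibration step, which is what the paper's neural network is trained to refine.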

【 License 】

Free   
