Journal Article Details
SN Applied Sciences
Depth perception in single rgb camera system using lens aperture and object size: a geometrical approach for depth estimation
P. J. A. Alphonse1  K. V. Sriharsha1 
[1] Department of Computer Applications, NIT;
Keywords: Focal length;    De-focus/in-focus;    Aperture number;    $f_{\text{stop}}$;    Film speed;    Exposure time;
DOI  :  10.1007/s42452-021-04212-4
Source: DOAJ
【 Abstract 】

Abstract In recent years, with increasing concern for public safety and security, human movements and action sequences are closely monitored when dealing with suspicious and criminal activities. Estimating the position and orientation associated with human movements requires depth information, which is conventionally obtained by fusing data from multiple cameras at different viewpoints. In practice, whenever occlusion occurs in a surveillance environment, there may be no pixel-to-pixel correspondence between the two images captured by the two cameras, and as a result the depth information may be inaccurate. Moreover, using more than one camera adds burden to the surveillance infrastructure. In this study, we present a mathematical model for acquiring object depth information with a single camera by capturing the in-focus portion of an object in a single image. With the camera in focus and the lens center as reference, the object distance is varied at a fixed focal length for each aperture setting. For each aperture reading at the corresponding distance, the object distance (depth) is estimated by relating three parameters: lens aperture radius, object distance, and object size in the image plane. The results show that the distance computed from this relationship approximates the actual distance with a standard error of estimate of 2.39 to 2.54 when tested on Nikon and Canon cameras, with an accuracy of 98.1% at the 95% confidence level.
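The abstract's core idea, recovering depth from a single in-focus image via lens geometry, can be illustrated with a minimal sketch. This is not the paper's full model (which also incorporates the aperture radius); it is only the standard thin-lens similar-triangles relation, assuming the object's real-world size and the lens focal length are known. The function name and example numbers are hypothetical.

```python
def estimate_depth(focal_length_mm: float, real_size_mm: float,
                   image_size_mm: float) -> float:
    """Estimate object distance from the thin-lens magnification relation.

    For a thin lens in focus, magnification m = image_size / real_size
    and m = f / (u - f), so the object distance is
        u = f * (real_size / image_size) + f.
    All quantities are in millimetres.
    """
    magnification = image_size_mm / real_size_mm
    return focal_length_mm * (1.0 / magnification + 1.0)


# Hypothetical example: a 50 mm lens images a 1.7 m tall person
# whose projection on the sensor spans 5 mm.
depth_mm = estimate_depth(50.0, 1700.0, 5.0)
print(f"Estimated depth: {depth_mm / 1000:.2f} m")  # about 17 m
```

In the paper's setting, varying the aperture at a fixed focal length refines this basic relation; the sketch above only shows why object size in the image plane alone already constrains depth.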

【 License 】

Unknown   

  Document metrics
  Downloads: 0   Views: 0