Journal Article Details
Applied Sciences
Semantic 3D Reconstruction with Learning MVS and 2D Segmentation of Aerial Images
Yao Wang [1], Yisong Chen [1], Hongwei Yi [2], Guoping Wang [2], Zizhuang Wei [2]
[1] Interaction Lab, School of Electronics Engineering and Computer Sciences, Peking University, Beijing 100871, China; [2] Graphics &
Keywords: semantic 3D reconstruction; deep learning; multi-view stereo; probabilistic fusion; graph-based refinement
DOI: 10.3390/app10041275
Source: DOAJ
【 Abstract 】

Semantic modeling is a challenging task that has received widespread attention in recent years. With the help of mini Unmanned Aerial Vehicles (UAVs), multi-view high-resolution aerial images of large-scale scenes can be conveniently collected. In this paper, we propose a semantic Multi-View Stereo (MVS) method to reconstruct 3D semantic models from 2D images. First, a 2D semantic probability distribution is obtained by a Convolutional Neural Network (CNN). Second, calibrated camera poses are determined by Structure from Motion (SfM), while depth maps are estimated by learning-based MVS. Combining 2D segmentation and 3D geometry information, dense point clouds with semantic labels are generated by a probability-based semantic fusion method. In the final stage, the coarse 3D semantic point cloud is optimized by both local and global refinements. By making full use of multi-view consistency, the proposed method efficiently produces a fine-level 3D semantic point cloud. The experimental results, evaluated via re-projection maps, achieve 88.4% Pixel Accuracy on the Urban Drone Dataset (UDD). In conclusion, our graph-based semantic fusion procedure and refinement based on local and global information can suppress and reduce the re-projection error.
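The abstract describes fusing per-view 2D segmentation probabilities into a single semantic label for each 3D point. A minimal sketch of one common probability-based fusion rule follows, assuming each 3D point has already been projected into its visible views and the CNN's softmax vector sampled at each projection; it treats views as independent observations and combines them by summing log-probabilities. This is an illustrative stand-in, not the paper's exact fusion or refinement procedure, and `fuse_semantic_labels` is a hypothetical helper name.

```python
import numpy as np

def fuse_semantic_labels(view_probs, eps=1e-8):
    """Fuse per-view class probabilities for one 3D point.

    view_probs: (n_views, n_classes) array; each row is the softmax
    output of the 2D segmentation CNN at the pixel where the point
    projects in that view.  Assumes views are independent, so the
    joint probability is the product across views (sum of logs).
    Returns the fused label and the normalized fused distribution.
    """
    log_probs = np.log(np.asarray(view_probs, dtype=float) + eps)
    fused = log_probs.sum(axis=0)           # joint log-probability per class
    probs = np.exp(fused - fused.max())     # numerically stable normalization
    probs /= probs.sum()
    return int(np.argmax(probs)), probs

# Example: three views observe the same point; two favor class 1.
views = [[0.2, 0.7, 0.1],
         [0.3, 0.6, 0.1],
         [0.5, 0.3, 0.2]]
label, fused_probs = fuse_semantic_labels(views)  # label is 1
```

Multiplying probabilities rather than majority voting lets a confident view outweigh several uncertain ones, which is the usual motivation for probabilistic fusion over hard-label voting.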

【 License 】

Unknown   
