Applied Sciences
Semantic 3D Reconstruction for Robotic Manipulators with an Eye-In-Hand Vision System
Xin Wang1, Wei Guo2, Yu Fu2, Fusheng Zha2, Hegao Cai2, Pengfei Wang2, Mantian Li2
[1] Shenzhen Academy of Aerospace Technology, Shenzhen 518057, China; [2] State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150080, China
Keywords: semantic 3D reconstruction; eye-in-hand vision system; robotic manipulator
DOI: 10.3390/app10031183
Source: DOAJ
Abstract
Three-dimensional reconstruction and semantic understanding have attracted extensive attention in recent years. However, current reconstruction techniques mainly target large-scale scenes, such as indoor environments or autonomous driving. There are few studies on small-scale, high-precision scene reconstruction for manipulator operation, which plays an essential role in decision-making and intelligent control systems. In this paper, a group of images captured by an eye-in-hand vision system mounted on a robotic manipulator is segmented using deep learning and geometric features, and a semantic 3D reconstruction is created with a map-stitching method. The results demonstrate that our method effectively improves both the quality of the segmented images and the precision of the semantic 3D reconstruction.
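The pipeline the abstract describes can be illustrated with a minimal sketch: each segmented view contributes labeled depth pixels, which are back-projected through a pinhole camera model and transformed into a common frame using the camera pose (known from the manipulator's kinematics in an eye-in-hand setup), then merged into one semantic point cloud. All function names, the intrinsics, and the data layout below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of semantic map stitching (not the paper's code).
# Each view supplies: labeled depth pixels (u, v, depth, label) and the
# camera pose (R, t) in the manipulator base frame.

def backproject(u, v, depth, fx, fy, cx, cy):
    """Pinhole back-projection of pixel (u, v) with metric depth, camera frame."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def transform(point, pose):
    """Apply a rigid transform pose = (R, t), R given as a 3x3 nested list."""
    R, t = pose
    x, y, z = point
    return tuple(R[i][0] * x + R[i][1] * y + R[i][2] * z + t[i] for i in range(3))

def stitch(views, intrinsics):
    """Merge labeled depth pixels from all views into one semantic point cloud."""
    fx, fy, cx, cy = intrinsics
    cloud = []  # list of (x, y, z, label) in the base frame
    for pixels, pose in views:
        for (u, v, depth, label) in pixels:
            p_cam = backproject(u, v, depth, fx, fy, cx, cy)
            cloud.append(transform(p_cam, pose) + (label,))
    return cloud

# Toy usage: two views of a labeled pixel, identity rotation,
# second view translated 0.1 m along x.
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
views = [
    ([(320, 240, 0.5, "cup")], (I, [0.0, 0.0, 0.0])),
    ([(320, 240, 0.5, "cup")], (I, [0.1, 0.0, 0.0])),
]
cloud = stitch(views, (600.0, 600.0, 320.0, 240.0))
```

In a real system the per-pixel labels would come from the deep-learning segmentation stage, and duplicate points from overlapping views would be fused (e.g. by voxel filtering) rather than simply concatenated.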
License
Unknown