Journal Article Details
NEUROCOMPUTING, Volume 259
Teaching robots to do object assembly using multi-modal 3D vision
Article
Wan, Weiwei [1]; Lu, Feng [2,3]; Wu, Zepei [1]; Harada, Kensuke [1,4]
[1] Natl Inst Adv Ind Sci & Engn, Intelligent Syst Res Inst, Tsukuba, Ibaraki, Japan
[2] Beihang Univ, Sch Comp Sci & Engn, State Key Lab Virtual Real Technol & Syst, Beijing, Peoples R China
[3] Beihang Univ, Int Inst Multidisciplinary Sci, Beijing, Peoples R China
[4] Osaka Univ, Grad Sch Engn Sci, Suita, Osaka, Japan
Keywords: 3D visual detection; Robot manipulation; Motion planning
DOI: 10.1016/j.neucom.2017.01.077
Source: Elsevier
【 Abstract 】

The motivation of this paper is to develop an intelligent robot assembly system using multi-modal vision for next-generation industrial assembly. The system comprises two phases: in the first phase, human beings demonstrate assembly tasks to robots; in the second phase, robots detect objects, plan grasps, and assemble objects following the human demonstration using AI search. A notorious difficulty in implementing such a system is the poor precision of 3D visual detection. This paper presents multi-modal approaches to overcome this difficulty: it uses AR markers in the teaching phase to detect human operation, and uses point clouds and geometric constraints in the robot execution phase to avoid unexpected occlusion and noise. The paper presents several experiments to examine the precision and correctness of the approaches. It demonstrates their applicability by integrating them with graph model-based motion planning and by executing the results on industrial robots in real-world scenarios. (C) 2017 Elsevier B.V. All rights reserved.
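The abstract describes detecting human operation from AR markers during the teaching phase. As a minimal illustrative sketch (not the authors' implementation), the Python snippet below recovers a marker's 6-DoF pose with OpenCV's ArUco module; the marker dictionary, marker size, and camera intrinsics are assumed values, and the legacy cv2.aruco.detectMarkers call applies to OpenCV 4.x contrib builds (newer releases expose cv2.aruco.ArucoDetector instead).

import cv2
import numpy as np

MARKER_LENGTH = 0.04  # marker side length in metres (assumed value)

# Assumed pinhole intrinsics; in practice these come from camera calibration.
CAMERA_MATRIX = np.array([[600.0, 0.0, 320.0],
                          [0.0, 600.0, 240.0],
                          [0.0, 0.0, 1.0]])
DIST_COEFFS = np.zeros(5)  # assume negligible lens distortion

def detect_marker_pose(image):
    """Return (rvec, tvec) of the first detected ArUco marker, or None."""
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    corners, ids, _ = cv2.aruco.detectMarkers(image, dictionary)
    if ids is None:
        return None
    # Marker corners in the marker's own frame, ordered to match the
    # ArUco convention (top-left, top-right, bottom-right, bottom-left).
    half = MARKER_LENGTH / 2.0
    obj_points = np.array([[-half,  half, 0.0],
                           [ half,  half, 0.0],
                           [ half, -half, 0.0],
                           [-half, -half, 0.0]], dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(obj_points,
                                  corners[0].reshape(4, 2),
                                  CAMERA_MATRIX, DIST_COEFFS)
    return (rvec, tvec) if ok else None

Tracking the marker pose over consecutive frames yields the object trajectory during the human demonstration; the execution phase described in the abstract would instead match depth-sensor point clouds against object models under geometric constraints.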

【 License 】

Free   

【 Attachments 】
10_1016_j_neucom_2017_01_077.pdf (2840 KB, PDF)