Journal Article Details
Sensors
Multi-View Visual Question Answering with Active Viewpoint Selection
Yue Qiu [1]  Yutaka Satoh [1]  Kenji Iwata [2]  Hirokatsu Kataoka [2]  Ryota Suzuki [2]
[1] Graduate School of Science and Technology, University of Tsukuba, Tsukuba 305-8577, Japan
[2] National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba 305-8560, Japan
Keywords: visual question answering; three-dimensional (3D) vision; reinforcement learning; deep learning; human–robot interaction
DOI: 10.3390/s20082281
Source: DOAJ
【 Abstract 】

This paper proposes a framework that iteratively observes a scene in order to answer a given question about it. Conventional visual question answering (VQA) methods are designed to answer questions from single-view images. However, in real-world applications such as human–robot interaction (HRI), where camera angles and occlusions must be considered, answering questions from a single view can be difficult. Because HRI applications make it possible to observe a scene from multiple viewpoints, it is reasonable to consider the VQA task in a multi-view setting. Moreover, since observing a scene from arbitrary viewpoints is usually challenging, we designed a framework that actively observes a scene only until it has gathered the information needed to answer the given question. The proposed framework achieves question-answering performance comparable to a state-of-the-art method while reducing the number of required observation viewpoints by a significant margin. We also found that the framework plausibly learns to choose more informative viewpoints, lowering the number of required camera movements. In addition, we built a multi-view VQA dataset based on real images; the proposed framework achieves high accuracy (94.01%) on this unseen real-image dataset.
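The abstract describes an observe-then-decide loop: at each step the system views the scene from its current viewpoint and either moves the camera to a new viewpoint or commits to an answer. Below is a minimal sketch of that loop, assuming a batch size of 1, a discrete set of candidate viewpoints, and hypothetical module names (image_encoder, state_rnn, policy_head, get_view); it is an illustration under these assumptions, not the authors' published implementation.

import torch
import torch.nn as nn

class ActiveViewpointVQA(nn.Module):
    # Sketch of the iterative "observe or answer" loop from the abstract.
    # Module names and the move/answer action split are illustrative assumptions.
    def __init__(self, num_answers, num_viewpoints, hidden_dim=512):
        super().__init__()
        # Stand-in for a CNN backbone producing one feature vector per view.
        self.image_encoder = nn.Sequential(
            nn.Flatten(), nn.LazyLinear(hidden_dim), nn.ReLU())
        # Encodes a question given as a sequence of 300-d word embeddings.
        self.question_encoder = nn.GRU(300, hidden_dim, batch_first=True)
        # Recurrent state that accumulates evidence across viewpoints.
        self.state_rnn = nn.GRUCell(hidden_dim * 2, hidden_dim)
        self.answer_head = nn.Linear(hidden_dim, num_answers)
        # One logit per candidate next viewpoint, plus one "answer now" action.
        self.policy_head = nn.Linear(hidden_dim, num_viewpoints + 1)

    def forward(self, get_view, question_emb, max_steps=8):
        # get_view(v) -> image tensor observed from viewpoint v (simulator or robot camera).
        _, q = self.question_encoder(question_emb)  # final hidden state: (1, 1, hidden_dim)
        q = q.squeeze(0)                            # (1, hidden_dim)
        state = q.clone()
        viewpoint = 0                               # initial camera pose
        for _ in range(max_steps):
            img_feat = self.image_encoder(get_view(viewpoint))
            state = self.state_rnn(torch.cat([img_feat, q], dim=-1), state)
            action = self.policy_head(state).argmax(dim=-1).item()  # greedy at test time
            if action == self.policy_head.out_features - 1:
                break                               # policy chose "answer now"
            viewpoint = action                      # otherwise, move to the chosen viewpoint
        return self.answer_head(state)              # answer logits

At training time, the move-or-answer policy would naturally be optimized with a policy-gradient method such as REINFORCE, consistent with the paper's reinforcement-learning keyword, while the answer head is trained with cross-entropy; these training details are likewise assumptions in this sketch.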

【 License 】

Unknown   
