IEEE Access
Learning to Fuse Multiscale Features for Visual Place Recognition
Lilian Zhang1, Xiaoping Hu1, Xiaofeng He1, Jun Mao1, Michael J. Milford2, Liao Wu2
[1] Department of Automation, National University of Defense Technology, Changsha, China; [2] School of Electrical Engineering and Computer Science, Queensland University of Technology, Brisbane, QLD, Australia
Keywords: Visual place recognition; deep learning; mobile robots; localization
DOI: 10.1109/ACCESS.2018.2889030
Source: DOAJ
【 Abstract 】
Efficient and robust visual place recognition is of great importance to autonomous mobile robots. Recent work has shown that features learned by convolutional neural networks achieve impressive performance with an efficient feature size, where most of them are pooled or aggregated from a convolutional feature map. However, convolutional filters only capture the appearance within their receptive fields and do not consider how to combine multiscale appearance for place recognition. In this paper, we propose a novel method to build a multiscale feature pyramid and present two approaches that use the pyramid to augment the place recognition capability. The first approach fuses the pyramid to obtain a new feature map that is aware of both the local and semi-global appearance, and the second approach learns an attention model from the feature pyramid to weight the spatial grids of the original feature map. Both approaches combine the multiscale features in the pyramid to suppress confusing local features, while tackling the problem in two different ways. Extensive experiments have been conducted on benchmark datasets with varying degrees of appearance and viewpoint variation. The results show that the proposed approaches achieve superior performance over networks without the multiscale feature fusion and multiscale attention components. Analyses of the performance of different feature pyramids are also provided.
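The two strategies described in the abstract can be sketched roughly as follows. This is an illustrative NumPy sketch under stated assumptions, not the paper's implementation: the pooling scales, nearest-neighbor upsampling, average-based fusion, and the norm-based softmax attention are all placeholders for the learned components the paper describes.

```python
import numpy as np

def avg_pool(fmap, k):
    # fmap: (C, H, W); non-overlapping k x k average pooling (H, W divisible by k)
    C, H, W = fmap.shape
    return fmap.reshape(C, H // k, k, W // k, k).mean(axis=(2, 4))

def upsample_nearest(fmap, k):
    # Nearest-neighbor upsampling by factor k along both spatial axes.
    return fmap.repeat(k, axis=1).repeat(k, axis=2)

def build_pyramid(fmap, scales=(1, 2, 4)):
    # Multiscale pyramid: each level pools the conv feature map over a
    # larger spatial extent, capturing local to semi-global appearance.
    return [avg_pool(fmap, k) for k in scales]

def fuse_pyramid(fmap, scales=(1, 2, 4)):
    # Approach 1 (sketch): upsample every pyramid level back to the
    # original resolution and average, yielding a fused feature map.
    pyramid = build_pyramid(fmap, scales)
    upsampled = [upsample_nearest(p, k) for p, k in zip(pyramid, scales)]
    return np.mean(upsampled, axis=0)

def attention_weighted(fmap, scales=(1, 2, 4)):
    # Approach 2 (sketch): derive a spatial attention map from the pyramid
    # (here: a softmax over per-grid L2 norms of the fused map, a stand-in
    # for the learned attention model) and reweight the original map.
    fused = fuse_pyramid(fmap, scales)
    energy = np.linalg.norm(fused, axis=0)          # (H, W)
    weights = np.exp(energy - energy.max())
    weights /= weights.sum()
    return fmap * weights                            # broadcast over channels
```

With a constant feature map, fusion leaves the values unchanged and the attention weights are uniform, which makes the sketch easy to sanity-check.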
【 License 】
Unknown