Journal article details
IEEE Access
Multi-Gate Attention Network for Image Captioning
Weitao Jiang [1], Haifeng Hu [1], Bohong Liu [1], Qiang Lu [2], Xiying Li [2]
[1] School of Electronic and Information Technology, Sun Yat-sen University, Guangzhou, China; [2] School of Intelligent Systems Engineering, Sun Yat-sen University, Guangzhou, China
Keywords: image captioning; self-attention; transformer; multi-gate attention
DOI: 10.1109/ACCESS.2021.3067607
Source: DOAJ
【 Abstract 】

The self-attention mechanism, which has been successfully applied to the current encoder-decoder framework for image captioning, is used to enhance feature representation in the image encoder and to capture the most relevant information for the language decoder. However, most existing methods assign attention weights to all candidate vectors, which implicitly assumes that every vector is relevant. Moreover, current self-attention mechanisms consider only inter-object relationships and ignore the intra-object attention distribution. In this paper, we propose a Multi-Gate Attention (MGA) block, which extends traditional self-attention with an additional Attention Weight Gate (AWG) module and a Self-Gated (SG) module. The former constrains attention weights to be assigned to the most contributive objects; the latter accounts for the intra-object attention distribution and eliminates irrelevant information within each object feature vector. Furthermore, most current image captioning methods directly apply the original transformer, designed for natural language processing tasks, to refine image features. We therefore propose a pre-layernorm transformer that simplifies the transformer architecture and makes it more efficient for image feature enhancement. By integrating MGA blocks with the pre-layernorm transformer architecture into the image encoder and the AWG module into the language decoder, we present a novel Multi-Gate Attention Network (MGAN). Experiments on the MS COCO dataset show that MGAN outperforms most state-of-the-art methods, and further experiments combining MGA blocks with other methods demonstrate the generalizability of our proposal.
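The abstract describes the architecture only at a high level. The sketch below is a hypothetical PyTorch illustration, not the authors' released code, of how a gated self-attention block with a pre-layernorm residual layout could be structured: one gate down-weights attention toward low-contribution objects and another gates each attended feature vector element-wise. All module names (`GatedSelfAttention`, `awg`, `self_gate`, `PreLNBlock`), dimensions, and the single-head simplification are assumptions for illustration.

```python
import torch
import torch.nn as nn


class GatedSelfAttention(nn.Module):
    """Single-head self-attention with two illustrative gates (assumed, not from the paper):
    an attention-weight gate that suppresses low-contribution objects, and a
    self-gate applied element-wise to the attended feature vectors."""

    def __init__(self, d_model):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.awg = nn.Linear(d_model, 1)               # scores how contributive each object is
        self.self_gate = nn.Linear(d_model, d_model)   # element-wise gate on each feature vector
        self.scale = d_model ** -0.5

    def forward(self, x):                              # x: (batch, num_objects, d_model)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        # Attention-weight gate: scale down columns belonging to low-contribution objects.
        gate_obj = torch.sigmoid(self.awg(x)).transpose(-2, -1)          # (batch, 1, num_objects)
        attn = attn * gate_obj
        attn = attn / attn.sum(dim=-1, keepdim=True).clamp_min(1e-6)     # re-normalise rows
        out = attn @ v
        # Self-gate: keep only the relevant components inside each object feature vector.
        return out * torch.sigmoid(self.self_gate(x))


class PreLNBlock(nn.Module):
    """Pre-layernorm transformer block: LayerNorm precedes each sub-layer
    instead of following the residual addition."""

    def __init__(self, d_model, d_ff=2048):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_model)
        self.ln2 = nn.LayerNorm(d_model)
        self.attn = GatedSelfAttention(d_model)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model)
        )

    def forward(self, x):
        x = x + self.attn(self.ln1(x))                 # pre-norm residual branch
        x = x + self.ffn(self.ln2(x))
        return x


# Toy usage: 36 region features of dimension 512 (dimensions chosen arbitrarily).
feats = torch.randn(2, 36, 512)
print(PreLNBlock(512)(feats).shape)                    # torch.Size([2, 36, 512])
```

In this sketch the pre-layernorm layout simply moves normalization before each sub-layer, which is what makes the residual path an identity mapping and is the usual motivation for that variant; how the paper combines the gates with multi-head attention is not specified in the abstract.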

【 License 】

Unknown   

  Article metrics
  Downloads: 0   Views: 0