Journal Article Details
IEEE Access
Chinese Relation Extraction Using Extend Softword
Bo Kong¹, Liruizhi Jia¹, Fuyuan Wei², Guangyao Wang³, Shengquan Liu³
[1] Key Laboratory of Signal Detection and Processing in Xinjiang Uygur Autonomous Region, Ürümqi, China
Keywords: information extraction; Chinese relation extraction; BLSTM; BERT
DOI  :  10.1109/ACCESS.2021.3102225
Source: DOAJ
【 Abstract 】

In recent years, many scholars have used word lexicons to incorporate word information into character-based models to improve the performance of Chinese relation extraction (RE). For example, Li et al. proposed the MG-Lattice model in 2019 and achieved state-of-the-art (SOTA) results. However, MG-Lattice still suffers from information loss due to its model structure, which limits the performance of Chinese RE. This paper proposes an adaptive method that incorporates word information at the embedding layer, using a word lexicon to merge all words matching each character into a character-based model, thereby solving the information loss problem of MG-Lattice. The method can be combined with other general neural networks and is transferable. Experiments on two benchmark Chinese RE datasets show that our method achieves an inference speed up to 12.9 times faster than the SOTA model, along with better performance. The results also show that the method, combined with the BERT pretrained model, effectively supplements the information obtained from pretraining, further improving the performance of Chinese RE.
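The core idea described above, matching each character against a word lexicon and merging the embeddings of all matched words into the character's input representation, can be illustrated with a minimal sketch. This is not the authors' implementation; the toy lexicon, embedding dimensions, and mean-pooling choice are all assumptions made for illustration.

```python
# Hedged sketch (not the paper's code): merge lexicon-word information into
# character embeddings at the embedding layer. Dimensions, the toy lexicon,
# and mean pooling over matched words are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
EMB_DIM = 8

# Toy character and word embedding tables (hypothetical).
char_emb = {c: rng.normal(size=EMB_DIM) for c in "北京市长"}
word_emb = {w: rng.normal(size=EMB_DIM) for w in ["北京", "北京市", "市长"]}

def match_words(sentence, lexicon):
    """For each character position, collect every lexicon word covering it."""
    matches = [[] for _ in sentence]
    for i in range(len(sentence)):
        for j in range(i + 1, len(sentence) + 1):
            w = sentence[i:j]
            if w in lexicon:
                for k in range(i, j):
                    matches[k].append(w)
    return matches

def embed(sentence):
    """Concatenate each character embedding with the mean of its matched-word
    embeddings (zeros if no word matches), giving one fixed-size vector per
    character that a standard sequence model (e.g. BLSTM) can consume."""
    matches = match_words(sentence, word_emb)
    out = []
    for c, ws in zip(sentence, matches):
        word_part = (np.mean([word_emb[w] for w in ws], axis=0)
                     if ws else np.zeros(EMB_DIM))
        out.append(np.concatenate([char_emb[c], word_part]))
    return np.stack(out)

X = embed("北京市长")
print(X.shape)  # (4, 16): one 2*EMB_DIM vector per character
```

Because the merge happens entirely at the embedding layer, the resulting representation is model-agnostic, which is consistent with the abstract's claim that the method is transferable to other general neural networks.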

【 License 】

Unknown   
