| NEUROCOMPUTING | Volume: 423 |
| Causality extraction based on self-attentive BiLSTM-CRF with transferred embeddings |
| Article |
| Li, Zhaoning [1]; Li, Qi [1]; Zou, Xiaotian [1]; Ren, Jiangtao [1] |
| [1] Sun Yat-sen University, Guangdong Province Key Laboratory of Computational Science, School of Data and Computer Science, Guangzhou 510006, Guangdong, China |
| Keywords: Causality extraction; Sequence labeling; BiLSTM-CRF; Flair embeddings; Self-attention |
| DOI: 10.1016/j.neucom.2020.08.078 |
| Source: Elsevier |
【 Abstract 】
Causality extraction from natural language texts is a challenging open problem in artificial intelligence. Existing methods use patterns, constraints, and machine learning techniques to extract causality; they depend heavily on domain knowledge and require considerable human effort and time for feature engineering. In this paper, we formulate causality extraction as a sequence labeling problem based on a novel causality tagging scheme. On this basis, we propose a neural causality extractor with the BiLSTM-CRF model as the backbone, named SCITE (Self-attentive BiLSTM-CRF wIth Transferred Embeddings), which can directly extract cause and effect without extracting candidate causal pairs and identifying their relations separately. To address the problem of data insufficiency, we transfer contextual string embeddings, also known as Flair embeddings, which are trained on a large corpus, to our task. In addition, to improve the performance of causality extraction, we introduce a multi-head self-attention mechanism into SCITE to learn the dependencies between causal words. We evaluate our method on a public dataset, and experimental results demonstrate that our method achieves significant and consistent improvement compared to baselines. (c) 2020 Elsevier B.V. All rights reserved.
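The abstract frames causality extraction as sequence labeling with a BiLSTM-CRF backbone over transferred Flair embeddings. The sketch below is a rough illustration of that setup using the flair library, not the authors' released code: the data paths, the tag-type name, causality tags such as B-C/I-C (cause) and B-E/I-E (effect), and the hyperparameters are placeholder assumptions, and the paper's multi-head self-attention layer between the BiLSTM and the CRF is not included here.

```python
# Minimal sketch: BiLSTM-CRF sequence tagger over stacked Flair embeddings
# (flair ~0.4/0.5 API). Paths, tag names, and hyperparameters are assumptions.
from flair.datasets import ColumnCorpus
from flair.embeddings import FlairEmbeddings, StackedEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer

# CoNLL-style files: one token per line, with token and causality tag columns.
corpus = ColumnCorpus(
    data_folder="data/causality",                 # assumed location
    column_format={0: "text", 1: "cause_effect"},
    train_file="train.txt",
    dev_file="dev.txt",
    test_file="test.txt",
)
tag_dictionary = corpus.make_tag_dictionary(tag_type="cause_effect")

# Transferred contextual string embeddings (Flair), forward + backward LMs.
embeddings = StackedEmbeddings([
    FlairEmbeddings("news-forward"),
    FlairEmbeddings("news-backward"),
])

# BiLSTM encoder with a CRF decoding layer; the paper's additional
# multi-head self-attention is omitted in this sketch.
tagger = SequenceTagger(
    hidden_size=256,
    embeddings=embeddings,
    tag_dictionary=tag_dictionary,
    tag_type="cause_effect",
    use_crf=True,
)

trainer = ModelTrainer(tagger, corpus)
trainer.train("models/causality-tagger",
              learning_rate=0.1, mini_batch_size=32, max_epochs=50)
```

With this setup, `tagger.predict(sentence)` on a new `flair.data.Sentence` yields per-token cause/effect tags, so cause and effect spans are read directly from the tag sequence rather than extracted as candidate pairs and classified separately.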
【 License 】
Free
【 Preview 】
| Files | Size | Format | View |
|---|---|---|---|
| 10_1016_j_neucom_2020_08_078.pdf | 2083 KB | PDF | |