Journal Article Details
Frontiers in Psychology
Using Neural Networks to Generate Inferential Roles for Natural Language
Peter Blouw
Keywords: natural language inference; recursive neural networks; language comprehension; semantics
DOI: 10.3389/fpsyg.2017.02335
Subject: Psychology (General)
Source: Frontiers
【 Abstract 】

Neural networks have long been used to study linguistic phenomena spanning the domains of phonology, morphology, syntax, and semantics. Of these domains, semantics is somewhat unique in that there is little clarity concerning what a model needs to be able to do in order to provide an account of how the meanings of complex linguistic expressions, such as sentences, are understood. We argue that one thing such models need to be able to do is generate predictions about which further sentences are likely to follow from a given sentence; these define the sentence's “inferential role.” We then show that it is possible to train a tree-structured neural network model to generate very simple examples of such inferential roles using the recently released Stanford Natural Language Inference (SNLI) dataset. On an empirical front, we evaluate the performance of this model by reporting entailment prediction accuracies on a set of test sentences not present in the training data. We also report the results of a simple study that compares human plausibility ratings for both human-generated and model-generated entailments for a random selection of sentences in this test set. On a more theoretical front, we argue in favor of a revision to some common assumptions about semantics: understanding a linguistic expression is not only a matter of mapping it onto a representation that somehow constitutes its meaning; rather, understanding a linguistic expression is mainly a matter of being able to draw certain inferences. Inference should accordingly be at the core of any model of semantic cognition.
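To make the tree-structured approach concrete, the following minimal Python sketch composes word vectors bottom-up over a binary parse and scores an SNLI-style premise/hypothesis pair. The dimensions, toy vocabulary, the compose/predict helpers, and the [p; h; p-h; p*h] feature scheme are illustrative assumptions only; this record does not specify the paper's actual architecture, which generates entailed sentences rather than merely classifying pairs.

# A minimal sketch of tree-structured (recursive) composition for natural
# language inference, assuming the common recursive-net formulation
# h = tanh(W [h_left; h_right] + b). Dimensions, vocabulary, and the
# 3-way classifier are hypothetical, not the paper's reported setup.
import numpy as np

rng = np.random.default_rng(0)
DIM = 16  # embedding / hidden size (hypothetical)

# Toy vocabulary with random vectors standing in for learned embeddings.
vocab = {w: rng.normal(scale=0.1, size=DIM)
         for w in ["a", "dog", "runs", "an", "animal", "moves"]}

# Composition parameters: combine two child vectors into a parent vector.
W_comp = rng.normal(scale=0.1, size=(DIM, 2 * DIM))
b_comp = np.zeros(DIM)

def compose(tree):
    """Recursively encode a binary parse tree into a single vector.

    A tree is either a word (str) or a pair (left_subtree, right_subtree).
    """
    if isinstance(tree, str):
        return vocab[tree]
    left, right = tree
    children = np.concatenate([compose(left), compose(right)])
    return np.tanh(W_comp @ children + b_comp)

# Entailment classifier over premise/hypothesis features, following the
# common practice of feeding [p; h; p-h; p*h] to a softmax layer.
LABELS = ["entailment", "contradiction", "neutral"]
W_cls = rng.normal(scale=0.1, size=(len(LABELS), 4 * DIM))

def predict(premise_tree, hypothesis_tree):
    p, h = compose(premise_tree), compose(hypothesis_tree)
    feats = np.concatenate([p, h, p - h, p * h])
    logits = W_cls @ feats
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return dict(zip(LABELS, probs))

# Example: SNLI-style premise/hypothesis pair with hand-built parses.
premise = (("a", "dog"), "runs")
hypothesis = (("an", "animal"), "moves")
print(predict(premise, hypothesis))  # untrained weights, so roughly uniform

In a trained model, the embedding, composition, and classifier weights would be fit end-to-end on SNLI premise/hypothesis pairs; the random initialization here only illustrates how the recursive encoding and the pair-level features fit together.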

【 License 】

CC BY   

【 Preview 】
Attachments
File                      Size    Format
RO201901220230336ZK.pdf   962 KB  PDF