Journal article details
Frontiers in Psychology
Pragmatically Framed Cross-Situational Noun Learning Using Computational Reinforcement Models
Shamima Najnin
Keywords: cross-situational learning; deep reinforcement learning; Q-learning; neural network; joint attention; prosodic cue
DOI: 10.3389/fpsyg.2018.00005
Subject classification: Psychology (general)
Source: Frontiers
【 Abstract 】

Cross-situational learning and social pragmatic theories are prominent mechanisms for learning word meanings (i.e., word-object pairs). In this paper, the role of reinforcement in early word learning by an artificial agent is investigated. When exposed to a group of speakers, the agent comes to understand an initial set of vocabulary items belonging to the language used by the group. Both cross-situational learning and social pragmatic theory are taken into account. Joint attention and prosodic cues in the caregiver's speech are considered as social cues. During agent-caregiver interaction, the agent selects a word from the caregiver's utterance and learns the relations between that word and the objects in its visual environment. The "novel words to novel objects" language-specific constraint is assumed for computing rewards. The models are learned by maximizing the expected reward using reinforcement learning algorithms: the table-based algorithms Q-learning, SARSA, and SARSA-λ, and the neural network-based algorithms Q-learning for neural networks (Q-NN), neural-fitted Q-network (NFQ), and deep Q-network (DQN). Neural network-based reinforcement learning models are preferred over table-based models for their better generalization and faster convergence. Simulations are carried out on a mother-infant interaction dataset from CHILDES to learn word-object pairings. Reinforcement is modeled in two cross-situational learning cases: (1) with joint attention (Attentional models), and (2) with joint attention and prosodic cues (Attentional-prosodic models). The Attentional-prosodic models outperform the Attentional models on the word-learning task, and the Attentional-prosodic DQN outperforms existing word-learning models on the same task.
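As a concrete illustration of the kind of learning rule described in the abstract, the sketch below shows a tabular Q-learning agent that associates a selected word with one of the visible objects using an epsilon-greedy choice and a toy reward standing in for the "novel words to novel objects" constraint. This is a minimal, hypothetical sketch, not the paper's implementation: the agent's actual state, action, and reward definitions, the joint-attention and prosodic cues, and the SARSA, SARSA-λ, Q-NN, NFQ, and DQN variants are not reproduced here.

```python
# Minimal illustrative sketch (assumed, not the paper's code): tabular
# Q-learning over (word, object) pairs, with each caregiver utterance
# treated as a one-step episode.
import random
from collections import defaultdict

ALPHA, EPSILON = 0.1, 0.2        # learning rate and exploration rate (assumed values)

Q = defaultdict(float)           # Q[(word, obj)] -> estimated association strength
lexicon = {}                     # word -> object pairings the agent has committed to

def choose_object(word, objects):
    """Epsilon-greedy choice of which visible object to pair with the word."""
    if random.random() < EPSILON:
        return random.choice(objects)
    return max(objects, key=lambda o: Q[(word, o)])

def reward(word, obj):
    """Toy stand-in for the novel-words-to-novel-objects constraint:
    a novel word is rewarded for pairing with a not-yet-named object,
    and a known word for pairing with its established object."""
    if word in lexicon:
        return 1.0 if lexicon[word] == obj else 0.0
    return 1.0 if obj not in lexicon.values() else 0.0

def interact(word, objects):
    """One caregiver-agent exchange, treated as a one-step episode,
    so the Q-learning target reduces to the immediate reward."""
    obj = choose_object(word, objects)
    r = reward(word, obj)
    Q[(word, obj)] += ALPHA * (r - Q[(word, obj)])
    if r > 0 and word not in lexicon:
        lexicon[word] = obj      # crude one-shot commitment; the paper learns this gradually
    return obj, r

# Toy run: repeated ambiguous scenes; co-occurrence plus reward strengthens
# the pairings that are consistent across situations.
for _ in range(200):
    interact("ball", ["ball", "cup"])
    interact("cup", ["cup", "dog"])
print(lexicon, dict(Q))
```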

【 License 】

CC BY   

【 Preview 】
Attachments
File: RO201901229367871ZK.pdf   Size: 9521 KB   Format: PDF