Journal Article Details
Frontiers in Psychology
Model-Free RL or Action Sequences?
article
Adam Morris [1]  Fiery Cushman [1]
[1] Department of Psychology, Harvard University, United States
Keywords: reinforcement learning; action sequences; model-free control; habit; decision-making
DOI: 10.3389/fpsyg.2019.02892
Subject classification: Social Sciences, Humanities and Arts (General)
Source: Frontiers
【 Abstract 】

The alignment of habits with model-free reinforcement learning (MF RL) is a success story for computational models of decision making, and MF RL has been applied to explain phasic dopamine responses (Schultz et al., 1997), working memory gating (O'Reilly and Frank, 2006), drug addiction (Redish, 2004), moral intuitions (Crockett, 2013; Cushman, 2013), and more. Yet the role of MF RL has recently been challenged by an alternative model, model-based selection of chained action sequences, which produces similar behavioral and neural patterns. Here, we report two experiments that dissociate MF RL from this prominent alternative, providing unconfounded empirical support for the role of MF RL in human decision making. Our results also show that people simultaneously use model-based selection of action sequences, revealing two distinct mechanisms of habitual control in a common experimental paradigm. These findings clarify the nature of habits and help solidify MF RL's central position in models of human behavior.
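
For readers unfamiliar with the two mechanisms contrasted in the abstract, the sketch below illustrates the distinction in general terms: a model-free learner caches a value per state-action pair and updates it from reward prediction errors, whereas model-based selection of a chained action sequence evaluates the whole sequence by simulating a known transition model. This is a minimal illustrative sketch only; the toy environment, function names (mf_update, evaluate_sequence), and parameter values (ALPHA, GAMMA) are assumptions for exposition and are not taken from the paper's experimental task or model.

```python
# Illustrative contrast between model-free value caching and model-based
# evaluation of a chained action sequence. All names and values are assumed.

ALPHA = 0.1   # learning rate (assumed value)
GAMMA = 0.9   # discount factor (assumed value)

# Model-free RL: cache a scalar value per (state, action) pair and update it
# from a temporal-difference (reward prediction) error, with no transition model.
q = {}

def mf_update(state, action, reward, next_state, next_actions):
    q_sa = q.get((state, action), 0.0)
    best_next = max((q.get((next_state, a), 0.0) for a in next_actions), default=0.0)
    # temporal-difference error drives the update
    q[(state, action)] = q_sa + ALPHA * (reward + GAMMA * best_next - q_sa)

# Model-based selection of a chained action sequence: treat a fixed sequence of
# primitive actions as one option and score it by rolling out the known model,
# rather than caching a learned value for the sequence itself.
def evaluate_sequence(sequence, model, start_state):
    state, total = start_state, 0.0
    for step, action in enumerate(sequence):
        state, reward = model(state, action)       # one-step lookahead via the model
        total += (GAMMA ** step) * reward
    return total

if __name__ == "__main__":
    # Hypothetical two-step task: "left" then "pull" starting from state "start".
    def toy_model(state, action):
        # Deterministic toy transitions and rewards, purely for illustration.
        return ("mid" if state == "start" else "end",
                1.0 if (state, action) == ("mid", "pull") else 0.0)

    mf_update("start", "left", 0.0, "mid", ["pull", "push"])
    print(q)                                                   # cached value table
    print(evaluate_sequence(["left", "pull"], toy_model, "start"))  # simulated return
```

The key contrast is that the model-free update never consults a transition model, while the sequence evaluation never caches a value for the sequence; the paper's experiments are designed to tease apart the behavioral signatures of these two mechanisms.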

【 License 】

CC BY   

【 Preview 】
Attachment list
Files Size Format View
RO202108170012090ZK.pdf 4673KB PDF download
  Document metrics
  Downloads: 0    Views: 0