PLoS One
Credit Assignment during Movement Reinforcement Learning
Gregory Dam [1], Konrad Kording [2], Kunlin Wei [3]
[1] Department of Behavioral Sciences, University of Rio Grande, Rio Grande, Ohio, United States of America; [2] Department of Physical Medicine and Rehabilitation, Rehabilitation Institute of Chicago, Northwestern University, Chicago, Illinois, United States of America; [3] Department of Psychology, Key Laboratory of Machine Perception (Ministry of Education), Beijing Engineering Research Center of Intelligent Rehabilitation Engineering, Peking University, Beijing, China
Keywords: Human learning; Learning; Musculoskeletal system; Learning curves; Nervous system; Behavior; Memory; Robots
DOI: 10.1371/journal.pone.0055352
Subject: Medicine (General)
Source: Public Library of Science
Abstract
We often need to learn how to move based on a single performance measure that reflects the overall success of our movements. However, movements have many properties, such as their trajectories, speeds, and end-point timing, so the brain must decide which properties of a movement should be improved; it needs to solve the credit assignment problem. Currently, little is known about how humans solve credit assignment problems in the context of reinforcement learning. Here we tested how human participants solve such problems during a trajectory-learning task. Without an explicitly defined target movement, participants made hand reaches and received monetary rewards as feedback on a trial-by-trial basis. The curvature and direction of the attempted reach trajectories determined the monetary rewards received, in a manner that could be manipulated experimentally. Based on the history of action-reward pairs, participants quickly solved the credit assignment problem and learned the implicit payoff function. A Bayesian credit-assignment model with built-in forgetting accurately predicts their trial-by-trial learning.
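The abstract's model can be illustrated with a minimal sketch: a learner estimating a hidden payoff function over movement features (here, curvature and direction) from noisy trial-by-trial rewards, using a Bayesian (Kalman-style) update whose posterior covariance is inflated each trial to implement forgetting. All names and parameter values below are illustrative assumptions, not the paper's actual task parameters or fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden payoff: reward depends linearly on two movement features
# (curvature, direction). Assumed for illustration only.
true_w = np.array([1.5, -0.8])   # unknown payoff weights
noise_sd = 0.2                   # reward noise

# Bayesian linear estimator with built-in forgetting.
w_hat = np.zeros(2)              # posterior mean over payoff weights
P = np.eye(2) * 10.0             # posterior covariance (initial uncertainty)
forget = 1.05                    # >1 inflates uncertainty, discounting old trials

for trial in range(200):
    P = P * forget                       # forgetting: old evidence decays
    x = rng.normal(size=2)               # features of the attempted reach
    reward = true_w @ x + rng.normal(0.0, noise_sd)
    # Standard Bayesian update for a linear-Gaussian observation model
    k = P @ x / (x @ P @ x + noise_sd**2)
    w_hat = w_hat + k * (reward - w_hat @ x)
    P = P - np.outer(k, x @ P)

print(np.round(w_hat, 2))  # estimate converges toward true_w
```

The forgetting factor keeps the posterior from collapsing, so the learner remains sensitive to changes in the payoff function, mirroring the trial-by-trial adaptability described in the abstract.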
License
CC BY
Preview
Files | Size | Format | View
---|---|---|---
RO201904026858965ZK.pdf | 676KB | PDF | download