eLife
Value signals guide abstraction during learning
Asuka Yamamoto [1], Pradyumna Sepulveda [2], Aurelio Cortese [3], Maryam Hashemzadeh [3], Mitsuo Kawato [4], Benedetto De Martino [5]
[1] Institute of Cognitive Neuroscience, University College London, London, United Kingdom; [2] School of Information Science, Nara Institute of Science and Technology, Nara, Japan; [3] Computational Neuroscience Labs, ATR Institute International, Kyoto, Japan; [4] Department of Computing Science, University of Alberta, Edmonton, Canada; [5] Institute of Cognitive Neuroscience, University College London, London, United Kingdom
Keywords: reinforcement learning; abstraction; vmPFC; confidence; multivoxel neural reinforcement; valuation
DOI: 10.7554/eLife.68943
Source: DOAJ
【Abstract】
The human brain excels at constructing and using abstractions, such as rules or concepts. Here, in two fMRI experiments, we demonstrate a mechanism of abstraction built upon the valuation of sensory features. Human volunteers learned novel association rules based on simple visual features. Reinforcement-learning algorithms revealed that, with learning, high-value abstract representations increasingly guided participant behaviour, resulting in better choices and higher subjective confidence. We also found that the brain area computing value signals – the ventromedial prefrontal cortex – prioritised and selected latent task elements during abstraction, both locally and through its connection to the visual cortex. Such a coding scheme predicts a causal role for valuation. Hence, in a second experiment, we used multivoxel neural reinforcement to test for the causality of feature valuation in the sensory cortex, as a mechanism of abstraction. Tagging the neural representation of a task feature with rewards evoked abstraction-based decisions. Together, these findings provide a novel interpretation of value as a goal-dependent, key factor in forging abstract representations.
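The abstract refers to reinforcement-learning models in which collapsing the stimulus onto a task-relevant (abstract) representation speeds value learning and improves choices. The sketch below is an illustration only, not the authors' model: a tabular delta-rule learner on a toy two-feature task, where the task variables, reward rule, and parameters (N_LEVELS, ALPHA, EPSILON) are all assumptions made for the example.

```python
# Illustrative sketch (assumed toy task, not the paper's model): tabular
# delta-rule / Q-learning with two response options, comparing value learning
# over raw feature conjunctions versus an abstract state that keeps only the
# task-relevant feature.
import random

N_LEVELS = 4      # levels per visual feature (assumed)
ALPHA = 0.3       # learning rate (assumed)
EPSILON = 0.1     # exploration rate (assumed)
N_TRIALS = 500


def run(use_abstraction: bool, seed: int = 0) -> float:
    """Return the mean reward earned over N_TRIALS trials."""
    rng = random.Random(seed)
    q = {}            # (state, action) -> value estimate
    total = 0.0
    for _ in range(N_TRIALS):
        f1, f2 = rng.randrange(N_LEVELS), rng.randrange(N_LEVELS)
        # Abstraction collapses the state onto the task-relevant feature only.
        state = f1 if use_abstraction else (f1, f2)
        # Epsilon-greedy choice between two responses.
        if rng.random() < EPSILON:
            action = rng.randrange(2)
        else:
            action = max(range(2), key=lambda a: q.get((state, a), 0.0))
        # Latent rule (assumed): reward depends only on feature 1.
        reward = 1.0 if action == f1 % 2 else 0.0
        # Delta-rule update of the chosen state-action value.
        key = (state, action)
        q[key] = q.get(key, 0.0) + ALPHA * (reward - q.get(key, 0.0))
        total += reward
    return total / N_TRIALS


if __name__ == "__main__":
    print("conjunctive states:", run(use_abstraction=False))
    print("abstract states:   ", run(use_abstraction=True))
```

Because the abstract learner tracks values over far fewer states (4 instead of 16 in this toy setup), it earns more reward within the same number of trials, which is the intuition behind abstraction yielding better choices during learning.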
【License】
Unknown