eLife
Likelihood approximation networks (LANs) for fast inference of simulation models in cognitive neuroscience
Lakshmi N Govindarajan1, Alexander Fengler1, Michael J Frank1, Tony Chen2
[1] Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Providence, United States; Carney Institute for Brain Science, Brown University, Providence, United States; [2] Psychology and Neuroscience Department, Boston College, Chestnut Hill, United States
Keywords: approximate bayesian computation; neural networks; sequential sampling models; cognitive neuroscience; computational models; Human; Mouse; Rat; Rhesus macaque
DOI: 10.7554/eLife.65074
Source: eLife Sciences Publications, Ltd
【 Abstract 】
In cognitive neuroscience, computational modeling can formally adjudicate between theories and affords quantitative fits to behavioral/brain data. Pragmatically, however, the space of plausible generative models considered is dramatically limited by the set of models with known likelihood functions. For many models, the lack of a closed-form likelihood typically impedes Bayesian inference methods. As a result, standard models are evaluated for convenience, even when other models might be superior. Likelihood-free methods exist but are limited by their computational cost or their restriction to particular inference scenarios. Here, we propose neural networks that learn approximate likelihoods for arbitrary generative models, allowing fast posterior sampling with only a one-off cost for model simulations that is amortized for future inference. We show that these methods can accurately recover posterior parameter distributions for a variety of neurocognitive process models. We provide code allowing users to deploy these methods for arbitrary hierarchical model instantiations without further training.
【 License 】
CC BY
【 Preview 】
File | Size | Format
---|---|---
RO202106218960362ZK.pdf | 6796 KB | PDF