Frontiers in Psychology
Multisensory integration, learning, and the predictive coding hypothesis
Nicholas Altieri
Keywords: predictive coding; Bayesian inference; audiovisual speech integration; EEG; parallel models
DOI: 10.3389/fpsyg.2014.00257
Subject classification: Psychology (general)
Source: Frontiers
【 Abstract 】
The multimodal nature of perception has generated several important questions about the encoding, learning, and retrieval of linguistic representations (e.g., Summerfield, 1987; Altieri et al., 2011; van Wassenhove, 2013). Historically, many theoretical accounts of speech perception have been driven by descriptions of auditory encoding; this makes sense because normal-hearing listeners rely predominantly on the auditory signal. From both evolutionary and empirical standpoints, however, comprehensive neurobiological accounts of speech perception must also explain interactions across sensory modalities, including the auditory, visual, and somatosensory systems, and the interplay between cross-modal and articulatory representations.
【 License 】
CC BY
【 Preview 】
| Files | Size | Format | View |
|---|---|---|---|
| RO201901228175286ZK.pdf | 368 KB | PDF | |