Journal Article Details
Frontiers in Psychology
Learning Spoken Words via the Ears and Eyes: Evidence from 30-Month-Old Children
Mélanie Havy
Keywords: audio-visual speech perception; word-learning; cross-modal recognition; lexical representation; child development
DOI: 10.3389/fpsyg.2017.02122
Subject Classification: Psychology (General)
Source: Frontiers
【 Abstract 】

From the very first moments of their lives, infants are able to link specific movements of the visual articulators to auditory speech signals. However, recent evidence indicates that infants focus primarily on auditory speech signals when learning new words. Here, we ask whether 30-month-old children are able to learn new words based solely on visible speech information, and whether information from both auditory and visual modalities is available after learning in only one modality. To test this, children were taught new lexical mappings. One group of children experienced the words in the auditory modality (i.e., acoustic form of the word with no accompanying face). Another group experienced the words in the visual modality (seeing a silent talking face). Lexical recognition was tested in either the learning modality or in the other modality. Results revealed successful word learning in either modality. Results further showed cross-modal recognition following an auditory-only, but not a visual-only, experience of the words. Together, these findings suggest that visible speech becomes increasingly informative for the purpose of lexical learning, but that an auditory-only experience evokes a cross-modal representation of the words.

【 License 】

CC BY   

【 Preview 】
Attachment List
File                       Size     Format   View
RO201901222617027ZK.pdf    2066KB   PDF      download
Document Metrics
Downloads: 3    Views: 3