Journal Article Details
Frontiers in Psychology
Multi-Talker Speech Promotes Greater Knowledge-Based Spoken Mandarin Word Recognition in First and Second Language Listeners
article
Seth Wiener [1]   Chao-Yang Lee [2]
[1] Processing and Pedagogy Lab, Department of Modern Languages, Carnegie Mellon University, United States
[2] Speech Processing Lab, Communication Sciences and Disorders, Ohio University, United States
Keywords: gating; spoken word recognition; Mandarin Chinese; talker variability; second language acquisition; distributional learning; lexical tone
DOI: 10.3389/fpsyg.2020.00214
Subject Classification: Social Sciences, Humanities and Arts (General)
Source: Frontiers
【 Abstract 】

Spoken word recognition involves a perceptual tradeoff between reliance on the incoming acoustic signal and knowledge about likely sound categories and their co-occurrences as words. This study examined how adult second language (L2) learners navigate between acoustic-based and knowledge-based spoken word recognition when listening to highly variable, multi-talker truncated speech, and whether this perceptual tradeoff changes as L2 listeners gradually become more proficient in their L2 after multiple months of structured classroom learning. First language (L1) Mandarin Chinese listeners and L1 English-L2 Mandarin adult listeners took part in a gating experiment. The L2 listeners were tested twice – once at the start of their intermediate/advanced L2 language class and again 2 months later. L1 listeners were tested only once. Participants were asked to identify syllable-tone words that varied in syllable token frequency (high/low according to a spoken word corpus) and syllable-conditioned tonal probability (most probable/least probable in speech given the syllable). The stimuli were recorded by 16 different talkers and presented at eight gates ranging from onset-only (gate 1) through onset + 40 ms increments (gates 2 through 7) to the full word (gate 8). Mixed-effects regression modeling was used to compare performance to our previous study, which used single-talker stimuli (Wiener et al., 2019). The results indicated that multi-talker speech caused both L1 and L2 listeners to rely more heavily on knowledge-based processing of tone. L1 listeners were able to draw on distributional knowledge of syllable-tone probabilities in early gates and switch to predominantly acoustic-based processing when more of the signal was available. In contrast, L2 listeners, with their limited experience with talker range normalization, were less able to transition effectively from probability-based to acoustic-based processing.
Moreover, for the L2 listeners, the reliance on such distributional information for spoken word recognition appeared to be conditioned by the nature of the acoustic signal. Single-talker speech did not result in the same pattern of probability-based tone processing, suggesting that knowledge-based processing of L2 speech may only occur under certain acoustic conditions, such as multi-talker speech.

【 License 】

CC BY

【 Preview 】
File list
Files Size Format
RO202108170003567ZK.pdf 1087KB PDF
Document Metrics
Downloads: 30   Views: 1