Journal Article Details
BMC Neuroscience
ERP evidence for the recognition of emotional prosody through simulated cochlear implant strategies
Deepashri Agrawal [3], Lydia Timm [3], Filipa Campos Viola [2], Stefan Debener [2], Andreas Büchner [1], Reinhard Dengler [3], Matthias Wittfoth [3]
[1] Department of Otolaryngology, Hannover Medical School, Hannover, Germany; [2] Department of Psychology, Carl von Ossietzky Universität, Oldenburg, Germany; [3] Department of Neurology, Hannover Medical School, Hannover, Germany
Keywords: Event-related potentials; Simulations; Cochlear implants; Emotional prosody
DOI: 10.1186/1471-2202-13-113
Received: 2012-04-05; Accepted: 2012-07-10; Published: 2012
【 Abstract 】

Background

Emotionally salient information in spoken language can be provided by variations in speech melody (prosody) or by emotional semantics. Emotional prosody is essential for conveying feelings through speech. In sensorineural hearing loss, impaired speech perception can be improved by cochlear implants (CIs). The aim of this study was to investigate the performance of normal-hearing (NH) participants on the perception of emotional prosody with vocoded stimuli. Semantically neutral sentences with emotional (happy, angry, and neutral) prosody were used. The sentences were manipulated to simulate two CI speech-coding strategies: the Advanced Combination Encoder (ACE) and the newly developed Psychoacoustic Advanced Combination Encoder (PACE). Twenty NH adults were asked to recognize emotional prosody from the ACE and PACE simulations. Performance was assessed using behavioral tests and event-related potentials (ERPs).
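To make the simulation approach concrete, the sketch below implements a noise-band vocoder with N-of-M channel selection, the envelope-based processing principle that ACE-like strategies build on. It is a minimal, hypothetical illustration: the function name nofm_vocode, the channel count, corner frequencies, and frame length are assumptions made for this sketch, not the parameters used to generate the study's stimuli; a PACE-like variant would additionally weight channels with a psychoacoustic masking model before selecting N of M.

```python
# Minimal noise-band vocoder with N-of-M channel selection (illustrative only;
# assumes fs >= 16 kHz and parameters chosen for readability, not fidelity).
import numpy as np
from scipy.signal import butter, sosfiltfilt

def nofm_vocode(x, fs, m_channels=22, n_selected=8, f_lo=200.0, f_hi=7000.0,
                frame_ms=8.0, env_cutoff=160.0):
    """Vocode x through M bandpass channels, keeping the N largest envelopes per frame."""
    edges = np.geomspace(f_lo, f_hi, m_channels + 1)        # log-spaced band edges
    frame = int(fs * frame_ms / 1000)
    n_frames = len(x) // frame
    out = np.zeros(n_frames * frame)
    lp = butter(2, env_cutoff, btype='low', fs=fs, output='sos')
    envs, carriers = [], []
    for ch in range(m_channels):
        bp = butter(4, [edges[ch], edges[ch + 1]], btype='band', fs=fs, output='sos')
        band = sosfiltfilt(bp, x)
        envs.append(sosfiltfilt(lp, np.abs(band)))                 # rectified, smoothed envelope
        carriers.append(sosfiltfilt(bp, np.random.randn(len(x))))  # band-limited noise carrier
    envs = np.asarray(envs)
    for f in range(n_frames):
        sl = slice(f * frame, (f + 1) * frame)
        keep = np.argsort(envs[:, sl].mean(axis=1))[-n_selected:]  # N-of-M: strongest channels
        for ch in keep:
            out[sl] += envs[ch, sl] * carriers[ch][sl]
    return out / (np.max(np.abs(out)) + 1e-12)                     # normalize to avoid clipping
```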

Results

Behavioral data revealed superior performance with the original stimuli compared to the simulations. For the simulations, recognition was better for happy and angry prosody than for neutral prosody. Irrespective of whether the stimuli were simulated or unsimulated, a significantly larger P200 event-related potential was observed after sentence onset for happy prosody than for the other two emotions. Furthermore, the P200 amplitude was significantly more positive for the PACE strategy than for the ACE strategy.
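For readers less familiar with ERP analysis, the following sketch shows one common way such a P200 effect can be quantified: average the epochs of a condition, baseline-correct, and take the mean amplitude in a window around 200 ms at a fronto-central channel. The 150–250 ms window, baseline interval, and channel index are illustrative assumptions, not the analysis parameters reported in the paper.

```python
# Illustrative P200 quantification from an epoched ERP array (assumed shapes:
# epochs = (n_trials, n_channels, n_samples), times in seconds).
import numpy as np

def p200_amplitude(epochs, times, channel=0, baseline=(-0.2, 0.0), window=(0.15, 0.25)):
    """Mean amplitude in the P200 window of the baseline-corrected average ERP."""
    erp = epochs.mean(axis=0)                                 # average across trials
    b = (times >= baseline[0]) & (times < baseline[1])
    erp = erp - erp[:, b].mean(axis=1, keepdims=True)         # subtract pre-stimulus baseline
    w = (times >= window[0]) & (times <= window[1])
    return erp[channel, w].mean()

# e.g., contrast conditions (hypothetical variable names):
# p200_happy   = p200_amplitude(happy_epochs, times, channel=fcz_index)
# p200_neutral = p200_amplitude(neutral_epochs, times, channel=fcz_index)
```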

Conclusions

The results suggest that the P200 peak indexes the active differentiation and recognition of emotional prosody. The larger P200 peak amplitude for happy prosody indicates the importance of fundamental frequency (F0) cues in prosody processing. The advantage of PACE over ACE highlights the privileged role of the psychoacoustic masking model in improving prosody perception. Taken together, the study emphasizes the importance of vocoded simulations for better understanding the prosodic cues that CI users may be utilizing.
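Because the interpretation centers on fundamental frequency (F0) cues, a minimal autocorrelation-based F0 estimator is sketched below to make that cue concrete; happy prosody typically carries a higher and more variable F0 contour, which envelope-based vocoding transmits only weakly. The function name, search range, and framing are illustrative assumptions, not the pitch-analysis settings used for the stimuli.

```python
# Minimal autocorrelation F0 estimate for one speech frame (the frame must be
# longer than fs / f_min samples; illustrative, not production-quality).
import numpy as np

def estimate_f0(frame, fs, f_min=75.0, f_max=400.0):
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode='full')[len(frame) - 1:]   # non-negative lags
    lag_min, lag_max = int(fs / f_max), int(fs / f_min)
    lag = lag_min + np.argmax(ac[lag_min:lag_max])                   # strongest periodicity in range
    return fs / lag
```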

【 License 】

   
© 2012 Agrawal et al.; licensee BioMed Central Ltd.
