Dissertation Details
Techniques for understanding hearing-impaired perception of consonant cues
Trevino, Andrea
Keywords: Hearing Impaired; Speech; Perception; Normal Hearing; k-means; Confusion Matrix; AI-gram; 3D Deep Search (3DDS); Auditory Training; Hearing; Hearing Aids; Consonant
Others: https://www.ideals.illinois.edu/bitstream/handle/2142/46591/Andrea_Trevino.pdf?sequence=1&isAllowed=y
United States | English
Source: The Illinois Digital Environment for Access to Learning and Scholarship
PDF
【 Abstract 】

We examine the cues used for consonant perception and the systematic behavior of normal-hearing and hearing-impaired listeners. All stimuli were presented as isolated consonant-vowel tokens, using the vowel /A/. Low-context stimuli, such as consonants, minimize the influence of cognitive abilities that vary across listeners (e.g., use of context, memory) and focus the analysis on differences in the processing or interpretation of the existing acoustic consonant cues.

In a previous study on stop consonants, the 3D Deep Search (3DDS) method was introduced for exploring the cues that are necessary and sufficient for normal-hearing speech perception. Here, this method is used to isolate and analyze, in time, frequency, and intensity, the perceptual cues of the naturally produced American English fricatives /S, Z, s, z, f, v, T, D/. The 3DDS analysis identifies the perceptual cue of the sibilant fricatives /Sa, Za, sa, za/ as a sustained frication noise preceding the vowel onset, with the acoustic cue for /sa, za/ located between 3.8–7 kHz and the acoustic cue for /Sa, Za/ located between 2–4 kHz. The /Sa, Za/ utterances were also found to contain frication components above 4 kHz that are unnecessary for correct perception but can cause listeners to hear /sa, za/ when the dominant cue between 2–4 kHz is removed by filtering; such cues are denoted "conflicting cues." While unvoiced fricatives were generally observed to have a longer frication period than their voiced counterparts, frication duration was found to be an unreliable cue for differentiating voiced from unvoiced fricatives. Instead, the wideband amplitude modulation of the F2 and F3 formants at the pitch frequency F0 was found to be a defining cue for voicing. As with previous results for stop consonants, the noise-robustness of the fricative consonants was significantly correlated with the intensity of the acoustic cues isolated by the 3DDS method.

The consonant recognition of 17 ears with sensorineural hearing loss is evaluated for fourteen consonants /p, t, k, f, s, S, b, d, g, v, z, Z, m, n/ + /A/, under four speech-weighted noise conditions (0, 6, and 12 dB SNR, plus quiet). For a single listener, high error rates can exist for a small subset of the test stimuli while performance on the majority of stimuli remains at ceiling. We show that hearing-impaired perception can vary across multiple tokens of the same consonant, in both noise-robustness and confusion groups. Within-consonant differences in noise-robustness are related to natural variations in the intensity of the consonant cue region. Within-consonant differences in confusion groups entail that averaging over multiple tokens of the same consonant yields a larger confusion group than for any single token, making the listener appear to behave less systematically. At the token level, hearing-impaired listeners are relatively consistent in their low-noise confusions; confusion groups are restricted to fewer than three confusions, on average. For each consonant token, the same confusion group is consistently observed across a population of hearing-impaired listeners.
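As a rough illustration of the token-level confusion analysis described above, the following minimal Python sketch (not the dissertation's actual code) tabulates a per-token confusion matrix from (presented, responded) trial pairs and defines a confusion group as the set of responses exceeding a probability threshold. The data layout, function names, and the 10% threshold are illustrative assumptions.

```python
import numpy as np

CONSONANTS = ["p", "t", "k", "f", "s", "S", "b", "d", "g", "v", "z", "Z", "m", "n"]

def confusion_matrix(trials, labels=CONSONANTS):
    """Count-based confusion matrix for one consonant token.

    trials: iterable of (presented_consonant, responded_consonant) pairs.
    Rows index the presented consonant, columns the listener's response.
    """
    idx = {c: i for i, c in enumerate(labels)}
    cm = np.zeros((len(labels), len(labels)), dtype=int)
    for presented, responded in trials:
        cm[idx[presented], idx[responded]] += 1
    return cm

def confusion_group(cm_row, labels=CONSONANTS, threshold=0.1):
    """Responses reported on more than `threshold` of the trials for one token."""
    probs = cm_row / max(cm_row.sum(), 1)
    return [c for c, p in zip(labels, probs) if p > threshold]
```

Tabulating one matrix per token, rather than pooling all tokens of a consonant, is what exposes the within-consonant differences in noise-robustness and confusion groups described above.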
Quantifying these token-level differences provides insight into hearing-impaired perception of speech under noisy conditions and characterizes each listener's hearing impairment. Auditory training programs are currently being explored as a method of improving hearing-impaired speech perception; precise knowledge of a patient's individual differences in speech perception allows a training program to be prescribed more accurately. Re-mapping, or variation in the weighting of acoustic cues due to auditory plasticity, can be examined with the detailed confusion analyses developed here. Although the tested tokens are noise-robust and unambiguous for normal-hearing listeners, subtle natural variations in signal properties can lead to systematic within-consonant differences for hearing-impaired listeners. At the individual token level, a k-means clustering analysis of the confusion data shows that hearing-impaired listeners fall into similar confusion-based groups. Many of the token-dependent confusions that define these groups can also be observed for normal-hearing listeners under higher noise levels or filtering conditions. These hearing-impaired listener groups correspond to different acoustic-cue weighting schemes, highlighting where auditory training should be most effective.
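The k-means grouping of hearing-impaired listeners mentioned above could be sketched as follows. Representing each listener by a row of response probabilities for a single token, the use of scikit-learn's KMeans, and the number of clusters are assumptions made for illustration only, not the analysis as actually implemented.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_listeners(response_probs, n_clusters=3, seed=0):
    """Group listeners by their confusion pattern for one consonant token.

    response_probs: array of shape (n_listeners, n_consonants), where each
    row is one listener's response-probability distribution for that token
    (rows sum to 1).  Returns an integer cluster label per listener.
    """
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    return km.fit_predict(response_probs)

# Example with made-up data: 6 listeners responding to one token over the
# 14-consonant response set (values are illustrative only).
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(14), size=6)
print(cluster_listeners(probs, n_clusters=2))
```

Clustering is done per token rather than per consonant, consistent with the token-level emphasis of the analysis; listeners assigned to the same cluster share a confusion pattern and, by the argument above, a similar acoustic-cue weighting scheme.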

【 Preview 】
Attachments
Techniques for understanding hearing-impaired perception of consonant cues (PDF, 5611 KB)
Document metrics
Downloads: 36; Views: 48