IET Signal Processing
Comparison of discrete transforms for deep‐neural‐networks‐based speech enhancement
Wissam A. Jassim [1], Naomi Harte [1]
[1] Sigmedia, ADAPT Centre, School of Engineering, Trinity College Dublin, Dublin, Ireland
Keywords: discrete transforms; feedforward neural nets; speech enhancement
DOI: 10.1049/sil2.12109
Source: DOAJ
Abstract
In recent studies of speech enhancement, a deep-learning model is trained to predict clean speech spectra from the known noisy speech spectra. Rather than using the traditional discrete Fourier transform (DFT), this paper considers other well-known transforms for generating the speech spectra used in deep-learning-based speech enhancement. In addition to the DFT, seven transforms were tested: the discrete cosine, discrete sine, discrete Haar, discrete Hadamard, discrete Tchebichef, discrete Krawtchouk, and discrete Tchebichef-Krawtchouk transforms. Two deep-learning architectures were tested: convolutional neural networks (CNNs) and fully connected neural networks. Experiments were performed on the NOIZEUS database, and various speech quality and intelligibility measures were adopted for performance evaluation. The quality and intelligibility scores of the enhanced speech show that the discrete sine transform is better suited to front-end processing with a CNN, as it outperformed the DFT in this application. The results also show that combining two or more existing transforms can improve performance in specific conditions. The tested models suggest that the DFT should not be assumed optimal for front-end processing with deep neural networks (DNNs). On this basis, other discrete transforms should be considered when designing robust DNN-based speech processing applications.
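To make the comparison concrete, the sketch below computes per-frame speech spectra with several of the transforms named in the abstract (DFT, DCT, DST) as candidate DNN input features. This is a minimal illustration only: the frame length (512 samples), Hamming window, 8 kHz sampling rate, and SciPy transform variants (DCT-II/DST-II) are assumptions for the example, not the paper's exact configuration.

```python
import numpy as np
from scipy.fft import rfft, dct, dst

def frame_spectra(frame):
    """Return candidate spectral representations for one speech frame."""
    windowed = frame * np.hamming(len(frame))
    return {
        "dft": np.abs(rfft(windowed)),               # DFT magnitude spectrum
        "dct": dct(windowed, type=2, norm="ortho"),  # real-valued DCT-II coefficients
        "dst": dst(windowed, type=2, norm="ortho"),  # real-valued DST-II coefficients
    }

# Example: one 512-sample frame of synthetic "noisy speech"
# (a 300 Hz tone plus Gaussian noise; purely illustrative data).
rng = np.random.default_rng(0)
t = np.arange(512) / 8000.0
frame = np.sin(2 * np.pi * 300 * t) + 0.3 * rng.standard_normal(512)
spectra = frame_spectra(frame)
print({name: s.shape for name, s in spectra.items()})
```

Note that the DCT and DST yield real-valued coefficient vectors of the same length as the frame, whereas the DFT of a real frame yields complex values (here reduced to magnitudes over 257 non-redundant bins), which is one practical difference when feeding such spectra to a DNN.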
License: Unknown