Journal Article Details
Laryngoscope Investigative Otolaryngology
Decoding phonation with artificial intelligence (DeP AI): Proof of concept
article
Maria E. Powell [1], Marcelino Rodriguez Cancio [2], David Young [1], William Nock [3], Beshoy Abdelmessih [1], Amy Zeller [1], Irvin Perez Morales [4], Peng Zhang [3], C. Gaelyn Garrett [1], Douglas Schmidt [3], Jules White [3], Alexander Gelbard [1]
[1] Vanderbilt Bill Wilkerson Center for Otolaryngology, Vanderbilt University Medical Center;Department of Information Technology, Vanderbilt University;Department of Electrical Engineering and Computer Science, Vanderbilt University;Center of Research in Computational and Numerical Methods in Engineering, Central University Marta Abreu of Las Villas;University of Brasília
Keywords: Voice disorders; detection; acoustic analysis; convolutional neural network; classification
DOI  :  10.1002/lio2.259
Subject classification: Environmental Science (General)
Source: Wiley
【 Abstract 】

Objective: Acoustic analysis of voice has the potential to expedite detection and diagnosis of voice disorders. Applying an image-based, neural-network approach to analyzing the acoustic signal may be an effective means of detecting and differentially diagnosing voice disorders. The purpose of this study is to provide a proof of concept that data embedded within human phonation can be accurately and efficiently decoded with deep neural network analysis to differentiate between normal and disordered voices.

Methods: Acoustic recordings from 10 vocally healthy speakers, as well as 70 patients with one of seven voice disorders (n = 10 per diagnosis), were acquired from a clinical database. Acoustic signals were converted into spectrograms and used to train a convolutional neural network developed with the Keras library. The network was trained separately for each of the seven diagnostic categories, and a binary classification task (i.e., normal vs. disordered) was performed for each category. All models were validated using 10-fold cross-validation.

Results: Average binary classification accuracies ranged from 58% to 90%. Models were most accurate in classifying adductor spasmodic dysphonia, unilateral vocal fold paralysis, vocal fold polyp, polypoid corditis, and recurrent respiratory papillomatosis. Despite the small sample size, these findings are consistent with previously published work using deep neural networks for classification of voice disorders.

Conclusion: These promising preliminary results support further study of deep neural networks for clinical detection and diagnosis of human voice disorders. The current models should be optimized with a larger sample size.
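The Methods describe a pipeline of spectrogram conversion, a Keras convolutional neural network per diagnostic category, and 10-fold cross-validation. The sketch below illustrates one way such a pipeline could be assembled; it is not the authors' published implementation, and the spectrogram parameters, layer sizes, training settings, and the helper names (wav_to_logmel, build_cnn, cross_validate) are assumptions for illustration only.

# Minimal sketch (not the published code): spectrogram-based binary
# classification of normal vs. disordered voice with a small Keras CNN,
# validated by 10-fold cross-validation as described in the abstract.
import numpy as np
import librosa
import tensorflow as tf
from sklearn.model_selection import StratifiedKFold

N_MELS, N_FRAMES = 128, 128  # assumed fixed spectrogram shape

def wav_to_logmel(path, sr=16000):
    """Load a recording and convert it to a fixed-size log-mel spectrogram."""
    y, sr = librosa.load(path, sr=sr)
    S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=N_MELS)
    S = librosa.power_to_db(S, ref=np.max)
    # Pad or truncate along the time axis so every input has the same shape.
    if S.shape[1] < N_FRAMES:
        S = np.pad(S, ((0, 0), (0, N_FRAMES - S.shape[1])))
    return S[:, :N_FRAMES]

def build_cnn(input_shape):
    """Small CNN with a sigmoid output for the normal-vs-disordered decision."""
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

def cross_validate(paths, labels, folds=10):
    """10-fold cross-validation for one binary task; returns per-fold accuracies."""
    X = np.stack([wav_to_logmel(p) for p in paths])[..., np.newaxis]
    y = np.asarray(labels)  # 0 = normal, 1 = disordered
    accuracies = []
    for train_idx, test_idx in StratifiedKFold(folds, shuffle=True, random_state=0).split(X, y):
        model = build_cnn(X.shape[1:])
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
        model.fit(X[train_idx], y[train_idx], epochs=20, batch_size=4, verbose=0)
        accuracies.append(model.evaluate(X[test_idx], y[test_idx], verbose=0)[1])
    return accuracies

In the study's design, cross_validate would be run once per diagnostic category on the 10 normal and 10 disordered recordings for that category, and the mean of the returned fold accuracies corresponds to the averaged binary classification accuracies reported in the Results.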

【 License 】

CC BY | CC BY-NC-ND

【 Preview 】
Attachment list
Files | Size | Format
RO202105310001076ZK.pdf | 1156 KB | PDF