Diagnostics
Automation of Lung Ultrasound Interpretation via Deep Learning for the Classification of Normal versus Abnormal Lung Parenchyma: A Multicenter Study
Scott Millington [1], Robert Arntfield [2], John Basmaji [2], Chintan Dave [2], Joseph McCauley [3], Bennett VanBerlo [4], Blake VanBerlo [5], Jason Deglint [6], Alex Ford [7], Benjamin Wu [8], Derek Wu [9], Jared Tschirhart [9], Jordan Ho [9], Rushil Chaudhary [9]
[1] Department of Critical Care Medicine, University of Ottawa, Ottawa, ON K1N 6N5, Canada
[2] Division of Critical Care Medicine, Western University, London, ON N6A 5C1, Canada
[3] Faculty of Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
[4] Faculty of Engineering, University of Western Ontario, London, ON N6A 5C1, Canada
[5] Faculty of Mathematics, University of Waterloo, Waterloo, ON N2L 3G1, Canada
[6] Faculty of Systems Design Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
[7] Independent Researcher, London, ON N6A 1L8, Canada
[8] Independent Researcher, London, ON N6C 4P9, Canada
[9] Schulich School of Medicine and Dentistry, Western University, London, ON N6A 5C1, Canada
Keywords: deep learning; ultrasound; lung ultrasound; artificial intelligence; automation; imaging
DOI: 10.3390/diagnostics11112049
Source: DOAJ
【 Abstract 】
Lung ultrasound (LUS) is an accurate thoracic imaging technique distinguished by its handheld size, low cost, and lack of radiation. User dependence and poor access to training have limited the impact and dissemination of LUS outside of acute care hospital environments. Automated interpretation of LUS using deep learning can overcome these barriers by increasing accuracy while allowing point-of-care use by non-experts. In this multicenter study, we seek to automate the clinically vital distinction between the A line (normal parenchyma) and B line (abnormal parenchyma) patterns on LUS by training a customized neural network on 272,891 labelled LUS images. After external validation on 23,393 frames, pragmatic clinical application at the clip level was performed on 1162 videos. The trained classifier demonstrated an area under the receiver operating characteristic curve (AUC) of 0.96 (±0.02) through 10-fold cross-validation on local frames and an AUC of 0.93 on the external validation dataset. Clip-level inference yielded sensitivities and specificities of 90% and 92% (local) and 83% and 82% (external), respectively, for detecting the B line pattern. This study demonstrates accurate deep-learning-enabled discrimination between normal and abnormal lung parenchyma on ultrasound frames, while delivering diagnostically important sensitivity and specificity at the video clip level.
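The abstract describes a frame-level classifier whose per-frame outputs are then aggregated into a clip-level decision, but it does not specify the aggregation rule. Below is a minimal sketch in Python of one plausible approach: threshold each frame's predicted B line probability, then call the clip abnormal if enough frames are positive. The function name classify_clip and both threshold values are illustrative assumptions, not details reported in the paper.

```python
import numpy as np

def classify_clip(frame_probs, frame_threshold=0.5, clip_threshold=0.5):
    """Aggregate per-frame B line probabilities into one clip-level label.

    frame_probs: 1-D array of classifier outputs, P(B line), one per frame
    of a single LUS clip. Both thresholds are illustrative assumptions;
    in practice they would be tuned on a validation set.
    """
    frame_preds = np.asarray(frame_probs) >= frame_threshold  # per-frame calls
    b_line_fraction = frame_preds.mean()                      # fraction of positive frames
    return "B line (abnormal)" if b_line_fraction >= clip_threshold else "A line (normal)"

# Example: a 30-frame clip in which most frames favour the B line pattern
rng = np.random.default_rng(0)
probs = rng.uniform(0.4, 0.9, size=30)
print(classify_clip(probs))
```

A fraction-of-frames rule like this trades frame-level noise for clip-level stability; the study's reported clip-level sensitivity and specificity would depend on how such thresholds were chosen.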
【 License 】
Unknown