| BioMedical Engineering OnLine | |
| Recognizing pathology of renal tumor from macroscopic cross-section image by deep learning | |
| Research | |
| Weihong Yang [1], Jing Yang [2], Wenqiang Zhang [3], Xiaoxu Yuan [3], Jing Chu [3], Chao Jiang [4], Zefang Lin [5] | |
| [1] Department of Medical Equipment Engineering, Zhuhai People’s Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai, China; [2] Department of Pathology, Zhuhai People’s Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai, China; [3] Department of Urology, Zhuhai People’s Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai, China; [4] Nursing Department, Guizhou Aerospace Hospital, Zunyi, China; [5] Zhuhai Interventional Medical Center, Zhuhai Precision Medical Center, Zhuhai People’s Hospital, Zhuhai Hospital Affiliated with Jinan University, Jinan University, Zhuhai, China | |
| Keywords: Renal tumor; Deep learning; Classification | |
| DOI: 10.1186/s12938-023-01064-4 | |
| Received: 2022-10-17; Accepted: 2023-01-09; Published: 2023 | |
| Source: Springer | |
【 Abstract 】
Objectives: This study aims to develop and evaluate deep learning-based classification models for recognizing the pathology of renal tumors from macroscopic cross-section images.

Methods: A total of 467 pathology-confirmed patients who underwent radical or partial nephrectomy were retrospectively enrolled. An experiment distinguishing malignant from benign renal tumors was conducted first, followed by multi-subtype classification models recognizing four subtypes of benign tumors and four subtypes of malignant tumors, respectively. The classification models shared the same convolutional neural network (CNN) backbones: EfficientNet-B4, ResNet-18, and VGG-16. Performance was evaluated by the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and accuracy, and the CNN models were compared quantitatively.

Results: For differentiating malignant from benign tumors, all three CNN models achieved relatively satisfactory performance, and the highest AUC was obtained by the ResNet-18 model (AUC = 0.9226). There was no statistically significant difference between the EfficientNet-B4 and ResNet-18 architectures, and both were statistically significantly better than the VGG-16 model. For distinguishing the malignant tumor subtypes, the VGG-16 model achieved a micro-averaged AUC of 0.9398, a macro-averaged sensitivity of 0.5774, a macro-averaged specificity of 0.8660, and a micro-averaged accuracy of 0.7917; the EfficientNet-B4 model performed better than VGG-16 on the other metrics, but not in terms of micro-averaged AUC. For recognizing the benign tumor subtypes, EfficientNet-B4 ranked best but showed no statistically significant difference from the other two models with respect to micro-averaged AUC.

Conclusions: The classification results were relatively satisfactory, showing potential for clinical application in analyzing renal tumor macroscopic cross-section images. Automatically distinguishing malignant from benign tumors and identifying the pathological subtype of a renal tumor could make patient management more efficient.
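The abstract names the backbones and metrics but gives no implementation details. Below is a minimal, hypothetical sketch of how the three CNN backbones could be fine-tuned for the benign-versus-malignant task and scored with AUC, sensitivity, specificity, and accuracy; PyTorch/torchvision and scikit-learn, the 0.5 decision threshold, and the helper names `build_backbone` and `evaluate` are assumptions, not the authors' code.

```python
# Hypothetical sketch (not the authors' implementation): fine-tune the three
# backbones named in the abstract and evaluate a binary benign/malignant model.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.metrics import roc_auc_score, confusion_matrix, accuracy_score


def build_backbone(name: str, num_classes: int = 2) -> nn.Module:
    """Load an ImageNet-pretrained backbone and replace its classification head."""
    if name == "resnet18":
        net = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
        net.fc = nn.Linear(net.fc.in_features, num_classes)
    elif name == "vgg16":
        net = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        net.classifier[6] = nn.Linear(net.classifier[6].in_features, num_classes)
    elif name == "efficientnet_b4":
        net = models.efficientnet_b4(weights=models.EfficientNet_B4_Weights.IMAGENET1K_V1)
        net.classifier[1] = nn.Linear(net.classifier[1].in_features, num_classes)
    else:
        raise ValueError(f"unknown backbone: {name}")
    return net


@torch.no_grad()
def evaluate(net: nn.Module, loader, device: str = "cpu") -> dict:
    """Compute AUC, sensitivity, specificity, and accuracy for a binary model."""
    net.eval().to(device)
    probs, labels = [], []
    for images, targets in loader:          # loader yields (image batch, 0/1 labels)
        scores = torch.softmax(net(images.to(device)), dim=1)[:, 1]  # P(malignant)
        probs.extend(scores.cpu().tolist())
        labels.extend(targets.tolist())
    preds = [int(p >= 0.5) for p in probs]  # assumed 0.5 decision threshold
    tn, fp, fn, tp = confusion_matrix(labels, preds).ravel()
    return {
        "auc": roc_auc_score(labels, probs),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": accuracy_score(labels, preds),
    }
```

For the two four-class subtype models described above, the same backbones would be built with `num_classes = 4` and the per-class outputs evaluated in a one-vs-rest fashion to obtain micro- and macro-averaged metrics.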
【 License 】
CC BY
© The Author(s) 2023
【 Preview 】
| Files | Size | Format | View |
|---|---|---|---|
| RO202305117543184ZK.pdf | 3963KB | PDF | |
| Fig. 6 | 784KB | Image | |
| Fig. 3 | 215KB | Image | |
| Fig. 1 | 1380KB | Image | |
| Fig. 4 | 172KB | Image | |
| Fig. 5 | 236KB | Image | |
| Fig. 7 | 169KB | Image | |
| Fig. 10 | 171KB | Image |
【 Figures 】
Fig. 10
Fig. 7
Fig. 5
Fig. 4
Fig. 1
Fig. 3
Fig. 6