Cancers | Volume: 13
Novel Transfer Learning Approach for Medical Imaging with Limited Labeled Data
Omran Al-Shamma 1, Mohammed A. Fadhel 2, Amjad J. Humaidi 3, J. Santamaría 4, Muthana Al-Amidie 5, Ahmed Al-Asadi 5, Ye Duan 5, Jinglan Zhang 6, Laith Alzubaidi 6
[1] AlNidhal Campus, University of Information Technology & Communications, Baghdad 10001, Iraq;
[2] College of Computer Science and Information Technology, University of Sumer, Thi Qar 64005, Iraq;
[3] Control and Systems Engineering Department, University of Technology, Baghdad 10001, Iraq;
[4] Department of Computer Science, University of Jaén, 23071 Jaén, Spain;
[5] Faculty of Electrical Engineering & Computer Science, University of Missouri, Columbia, MO 65211, USA;
[6] School of Computer Science, Queensland University of Technology, Brisbane, QLD 4000, Australia
Keywords: deep learning; transfer learning; medical image analysis; convolutional neural network (CNN); machine learning
DOI: 10.3390/cancers13071590
Source: DOAJ
[Abstract]
Deep learning requires large amounts of data to perform well, yet the field of medical image analysis suffers from a shortage of data for training deep learning models. Moreover, medical images must be labeled manually, usually by annotators from various backgrounds, and the annotation process is time-consuming, expensive, and prone to errors. Transfer learning was introduced to reduce the need for annotation by initializing deep learning models with knowledge learned on a previous task and then fine-tuning them on a relatively small dataset for the current task. Most medical image classification methods employ transfer learning from models pretrained on natural-image datasets such as ImageNet, which has proven ineffective: the features learned from natural images do not match those needed for medical images, and it also forces the use of unnecessarily deep and elaborate models. In this paper, we propose a novel transfer learning approach that overcomes these drawbacks by first training the deep learning model on large unlabeled medical image datasets and then transferring that knowledge to train the model on the small amount of labeled medical images. We also propose a new deep convolutional neural network (DCNN) model that combines recent advancements in the field. We conducted several experiments on two challenging medical imaging scenarios: skin cancer and breast cancer classification. The reported results empirically show that the proposed approach significantly improves the performance of both classification scenarios. For skin cancer, the proposed model achieved an F1-score of 89.09% when trained from scratch and 98.53% with the proposed approach. For breast cancer, it achieved accuracy values of 85.29% when trained from scratch and 97.51% with the proposed approach. We conclude that our method can be applied to many medical imaging problems in which a substantial amount of unlabeled image data is available but labeled image data is limited, and that it can also be used to improve the performance of medical imaging tasks in the same domain. To demonstrate this, we used the pretrained skin cancer model to train on foot skin images and classify them into two classes, normal or abnormal (diabetic foot ulcer (DFU)); this model achieved an F1-score of 86.0% when trained from scratch, 96.25% with transfer learning, and 99.25% with double-transfer learning.
[License]
Unknown
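
The abstract describes a two-stage pipeline: pretrain a DCNN on a large set of unlabeled in-domain medical images, transfer the learned weights, and fine-tune on the small labeled set (optionally transferring once more to a related task, as in the DFU example). The following is a minimal sketch of that pattern, not the authors' implementation: the toy backbone, the self-supervised pretext task (rotation prediction), and all loaders and hyperparameters are illustrative assumptions, since the abstract does not specify how the unlabeled images are used for pretraining.

```python
# Illustrative sketch of the pretrain-then-fine-tune pattern (PyTorch).
# All architectural and training choices here are assumptions, not the paper's.
import torch
import torch.nn as nn

class SmallDCNN(nn.Module):
    """Toy convolutional backbone standing in for the paper's DCNN."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, x):
        return self.features(x).flatten(1)  # (N, 64) embedding

def pretrain_on_unlabeled(backbone, unlabeled_loader, epochs=1, device="cpu"):
    """Stage 1: learn in-domain features from unlabeled images via a
    self-supervised pretext task (rotation prediction, an assumed choice).
    Assumes square images so rotated tensors keep the same shape."""
    head = nn.Linear(64, 4).to(device)  # predicts rotation in {0, 90, 180, 270}
    model = nn.Sequential(backbone, head).to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for imgs in unlabeled_loader:  # loader yields image tensors only
            imgs = imgs.to(device)
            k = torch.randint(0, 4, (imgs.size(0),), device=device)
            rotated = torch.stack([torch.rot90(img, int(r), dims=(1, 2))
                                   for img, r in zip(imgs, k)])
            loss = loss_fn(model(rotated), k)
            opt.zero_grad(); loss.backward(); opt.step()
    return backbone  # backbone now holds in-domain features

def finetune_on_labeled(backbone, labeled_loader, num_classes, epochs=1, device="cpu"):
    """Stage 2: transfer the pretrained backbone and fine-tune on the small
    labeled set (e.g., skin or breast cancer classes)."""
    clf = nn.Sequential(backbone, nn.Linear(64, num_classes)).to(device)
    opt = torch.optim.Adam(clf.parameters(), lr=1e-4)  # smaller LR for fine-tuning
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for imgs, labels in labeled_loader:
            imgs, labels = imgs.to(device), labels.to(device)
            loss = loss_fn(clf(imgs), labels)
            opt.zero_grad(); loss.backward(); opt.step()
    return clf

# "Double transfer" in this sketch amounts to reusing the already fine-tuned
# cancer backbone as the starting point for a related binary task, e.g.
# finetune_on_labeled(backbone, dfu_loader, num_classes=2) for normal vs. DFU.
```

The key design point the sketch tries to capture is that both stages train on images from the same (medical) domain, so the transferred features are expected to match the target task better than features learned from natural images such as ImageNet.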