Frontiers in Oncology
A novel framework of multiclass skin lesion recognition from dermoscopic images using deep learning and explainable AI
Jamel Baili1, Naveed Ahmad2, Jamal Hussain Shah2, Muhammad Attique Khan3, Ye Jin Kim4, Jae-Hyuk Cha4, Ghulam Jillani Ansari5, Usman Tariq6
Affiliations:
College of Computer Science, King Khalid University, Abha, Saudi Arabia
Department of Computer Science, COMSATS University Islamabad, Wah Cantt, Pakistan
Department of Computer Science, HITEC University, Taxila, Pakistan
Department of Informatics, University of Leicester, Leicester, United Kingdom
Department of Computer Science, Hanyang University, Seoul, Republic of Korea
Department of Computer Science, University of Education, Lahore, Pakistan
Department of Management Information Systems, CoBA, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
Keywords: dermoscopic images; skin cancer; deep features; explainable AI; feature selection
DOI: 10.3389/fonc.2023.1151257
Received: 2023-01-31; accepted: 2023-05-19; published: 2023
Source: Frontiers
Abstract
Skin cancer is a serious disease that affects people all over the world. Melanoma is an aggressive form of skin cancer, and early detection can significantly reduce mortality. In the United States, approximately 97,610 new cases of melanoma are expected to be diagnosed in 2023. However, challenges such as lesion irregularities, low-contrast lesions, intraclass color similarity, redundant features, and imbalanced datasets make it extremely difficult to improve recognition accuracy with computerized techniques. This work presents a new framework for skin lesion recognition based on data augmentation, deep learning, and explainable artificial intelligence. In the proposed framework, data augmentation is first performed to increase the dataset size, and two pretrained deep learning models, Xception and ShuffleNet, are then fine-tuned and trained using deep transfer learning. Both models extract deep features through their global average pooling layers. Because analysis of this step shows that important information is lost by either model alone, the two feature sets are fused. Since fusion increases computational time, an improved Butterfly Optimization Algorithm is developed to select only the best features, which are then classified using machine learning classifiers. In addition, Grad-CAM-based visualization is performed to highlight the image regions that drive each prediction. Two publicly available datasets, ISIC2018 and HAM10000, are used, on which the framework achieves accuracies of 99.3% and 91.5%, respectively. Comparison with state-of-the-art methods shows improved accuracy and reduced computational time.
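The pipeline sketched in the abstract (deep feature extraction, serial fusion, then optimizer-driven feature selection) can be illustrated in miniature. This is a hedged sketch, not the authors' code: the feature dimensions (2048 for Xception, 1024 for ShuffleNet after global average pooling) are typical values for these architectures, the features here are random stand-ins for real network activations, and the binary selection mask is random rather than produced by the improved Butterfly Optimization Algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_images = 8

# Stand-ins for global-average-pooled deep features of two fine-tuned backbones
# (dimensions are typical for Xception and ShuffleNet, assumed here for illustration).
xception_feats = rng.normal(size=(n_images, 2048))
shufflenet_feats = rng.normal(size=(n_images, 1024))

# Serial fusion: concatenate the two feature vectors of each image into one 3072-D vector.
fused = np.concatenate([xception_feats, shufflenet_feats], axis=1)

# Feature selection: an optimizer such as the improved Butterfly Optimization
# Algorithm would search for a binary mask maximizing classification fitness;
# a random mask stands in for that search result here.
mask = rng.random(fused.shape[1]) < 0.5
selected = fused[:, mask]

print(fused.shape)     # (8, 3072)
print(selected.shape)  # fewer columns than the fused matrix
```

The selected, lower-dimensional matrix is what would be passed to the machine learning classifiers, which is where the reduction in computational time comes from.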
License: Unknown
Copyright © 2023 Ahmad, Shah, Khan, Baili, Ansari, Tariq, Kim and Cha