Journal Article Details
Applied Sciences
Evaluating Explainable Artificial Intelligence for X-ray Image Analysis
Miquel Miró-Nicolau1  Gabriel Moyà-Alcover1  Antoni Jaume-i-Capó1 
[1]Computer Graphics and Vision and AI Group (UGiVIA), Research Institute of Health Sciences (IUNICS), Department of Mathematics and Computer Science, Universitat de les Illes Balears, 07122 Palma, Spain
Keywords: explainable artificial intelligence; artificial intelligence; X-ray; decision support systems; neural networks; image analysis
DOI: 10.3390/app12094459
Source: DOAJ
Abstract
The lack of justification for the results obtained by artificial intelligence (AI) algorithms has limited their use in medical contexts. Explainable artificial intelligence (XAI) has been proposed to increase the explainability of existing AI methods. We performed a systematic literature review, following the guidelines proposed by Kitchenham and Charters, of studies that applied XAI methods to X-ray-image-related tasks. We identified 141 studies relevant to the objective of this research across five databases. For each study, we assessed its quality and then analyzed it against a specific set of research questions. We determined two primary purposes for X-ray images: the detection of bone diseases and the detection of lung diseases. Most of the AI methods used were based on convolutional neural networks (CNNs). We identified the different techniques used to increase the explainability of the models and grouped them according to the kind of explainability obtained. Most of the articles did not evaluate the quality of the explanations produced, which undermines confidence in those explanations. Finally, we identified the current challenges and future directions of this subject and provide guidelines to help practitioners and researchers address the limitations and weaknesses we detected.
License: Unknown