AIMS Medical Science
Decision trees and multi-level ensemble classifiers for neurological diagnostics
Article
Herbert F. Jelinek [1], Jemal H. Abawajy [3], Andrei V. Kelarev [1], Morshed U. Chowdhury [3], Andrew Stranieri [1]
[1] Centre for Research in Complex Systems and School of Community Health, Charles Sturt University
[2] Biomedical Engineering, Khalifa University of Science, Technology and Research (KUSTAR), United Arab Emirates
[3] School of Information Technology, Deakin University
[4] Centre for Informatics and Applied Optimisation, School of Science, Information Technology and Engineering, Federation University
Keywords: diabetes; cardiac autonomic neuropathy; neurology; data mining; decision trees; ensemble classifiers; knowledge discovery
DOI: 10.3934/medsci.2014.1.1
Source: American Institute of Mathematical Sciences
【 Abstract 】
Cardiac autonomic neuropathy (CAN) is a well-known complication of diabetes that leads to impaired regulation of blood pressure and heart rate and increases the risk of cardiac-associated mortality in diabetes patients. The neurological diagnostics of CAN progression is an important problem that is being actively investigated. This paper uses data collected as part of a large and unique Diabetes Screening Complications Research Initiative (DiScRi) in Australia, comprising numerous diabetes-related tests, to classify CAN progression. The paper is devoted to recent experimental investigations of the effectiveness of decision trees, ensemble classifiers and multi-level ensemble classifiers for the neurological diagnostics of CAN. We present the results of experiments comparing the effectiveness of the ADTree, J48, NBTree, RandomTree, REPTree and SimpleCart decision tree classifiers. Our results show that SimpleCart was the most effective at classifying CAN for the DiScRi data set. We also investigated and compared the effectiveness of AdaBoost, Bagging, MultiBoost, Stacking, Decorate, Dagging and Grading, based on Ripple Down Rules, as examples of ensemble classifiers. Further, we investigated the effectiveness of these ensemble methods as a function of the base classifier and determined that Random Forest performed best as a base classifier, while AdaBoost, Bagging and Decorate achieved the best outcomes as meta-classifiers in this setting. Finally, we investigated whether the best-performing meta-classifiers could enhance performance further within the framework of a multi-level classification paradigm. Experimental results show that the multi-level paradigm performed best when Bagging and Decorate were combined in the construction of a multi-level ensemble classifier.
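The classifiers and meta-classifiers named in the abstract correspond to implementations in the WEKA toolkit. To illustrate the multi-level idea only, the following is a minimal Java sketch of a two-level ensemble built with WEKA's API, assuming a WEKA release in which weka.classifiers.meta.Decorate is available (part of the core in WEKA 3.6, an optional package in later versions). The data file name discri.arff and the particular nesting order (Bagging around Decorate around a Random Forest base) are illustrative assumptions, not the exact configuration reported in the paper.

```java
import java.util.Random;

import weka.classifiers.Evaluation;
import weka.classifiers.meta.Bagging;
import weka.classifiers.meta.Decorate;
import weka.classifiers.trees.RandomForest;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class MultiLevelEnsembleSketch {
    public static void main(String[] args) throws Exception {
        // Load a tabular data set in ARFF format; "discri.arff" is a placeholder
        // name, not the actual DiScRi data set used in the paper.
        Instances data = DataSource.read("discri.arff");
        data.setClassIndex(data.numAttributes() - 1);

        // Base level: Random Forest, reported in the abstract as the best base classifier.
        RandomForest base = new RandomForest();

        // First meta-level: Decorate wraps the base classifier.
        Decorate decorate = new Decorate();
        decorate.setClassifier(base);

        // Second meta-level: Bagging wraps the Decorate ensemble, giving a
        // multi-level ensemble that combines Bagging and Decorate.
        Bagging bagging = new Bagging();
        bagging.setClassifier(decorate);

        // Estimate classification accuracy with 10-fold cross-validation.
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(bagging, data, 10, new Random(1));
        System.out.println(eval.toSummaryString("\nMulti-level ensemble results\n", false));
    }
}
```

Swapping the nesting order or replacing the base classifier only requires changing the setClassifier calls, which is one way the different base/meta-classifier combinations compared in the paper could be explored.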
【 License 】
CC BY
【 Preview 】
| Files | Size | Format | View |
|---|---|---|---|
| RO202106050000868ZK.pdf | 338 KB | PDF | download |