Journal Article Details
Sensors
How Validation Methodology Influences Human Activity Recognition Mobile Systems
Hendrio Bragança [1]; Eduardo Souto [1]; Juan G. Colonna [1]; Horácio A. B. F. Oliveira [1]
[1] Institute of Computing, Federal University of Amazonas, Manaus 69067-005, Brazil;
Keywords: human activity recognition; validation methodology; leave-one-subject-out cross-validation; explainable methods; Shapley additive explanations; machine learning
DOI: 10.3390/s22062360
Source: DOAJ
【 Abstract 】

In this article, we introduce explainable methods to understand how Human Activity Recognition (HAR) mobile systems perform under different validation strategies. Our results introduce a new way to discover potential bias problems that overestimate the prediction accuracy of an algorithm because of an inappropriate choice of validation methodology. We show how the SHAP (Shapley additive explanations) framework, used in the literature to explain the predictions of any machine learning model, can provide graphical insights into how human activity recognition models achieve their results. It thus becomes possible to analyze, in a simplified way, which features are important to a HAR system under each validation methodology. We demonstrate not only that k-fold cross-validation (k-CV), the validation procedure used in most works to estimate the expected error of a HAR system, can overestimate prediction accuracy by about 13% on three public datasets, but also that it selects a different feature set than the universal model. Combining explainable methods with machine learning algorithms can help new researchers look inside the decisions of machine learning algorithms, often avoiding overestimation of prediction accuracy, understanding relations between features, and finding bias before deploying the system in real-world scenarios.
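The contrast between subject-mixed k-fold cross-validation and leave-one-subject-out (LOSO) evaluation described above can be illustrated with a short sketch. The code below is not the authors' implementation: the synthetic subject-grouped data, the random-forest classifier, and all parameter values are assumptions made for the example, which only demonstrates how scikit-learn's KFold and LeaveOneGroupOut splitters differ on per-subject data.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_subjects, samples_per_subject, n_features = 10, 100, 6
X_parts, y_parts, g_parts = [], [], []
for subject in range(n_subjects):
    offset = rng.normal(scale=1.0, size=n_features)          # person-specific sensor bias (illustrative)
    labels = rng.integers(0, 3, size=samples_per_subject)    # 3 hypothetical activities
    feats = rng.normal(size=(samples_per_subject, n_features)) + offset + labels[:, None]
    X_parts.append(feats)
    y_parts.append(labels)
    g_parts.append(np.full(samples_per_subject, subject))

X = np.vstack(X_parts)
y = np.concatenate(y_parts)
groups = np.concatenate(g_parts)
clf = RandomForestClassifier(n_estimators=100, random_state=0)

# k-fold CV mixes each subject's samples across train and test folds,
# which tends to inflate accuracy estimates for HAR data.
kcv_scores = cross_val_score(clf, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0))

# LOSO keeps every subject entirely in either train or test,
# approximating performance on users never seen during training.
loso_scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())

print(f"k-fold CV accuracy: {kcv_scores.mean():.3f}")
print(f"LOSO accuracy:      {loso_scores.mean():.3f}")

In the same spirit, a tree-model explainer such as shap.TreeExplainer could be applied to the models trained under each splitting scheme to compare which features each validation methodology emphasizes, analogous to the SHAP-based analysis reported in the article.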

【 License 】

Unknown   
