Journal Article Details
Applied Sciences
Explaining Bad Forecasts in Global Time Series Models
Klemen Kenda [1]; Dunja Mladenić [1]; Elena Trajkova [1]; Blaž Fortuna [1]; Jože Rožanec [1]
[1] Jožef Stefan Institute, Jamova 39, 1000 Ljubljana, Slovenia;
Keywords: explainable artificial intelligence; XAI; time series forecasting; global time series models; machine learning; artificial intelligence
DOI: 10.3390/app11199243
Source: DOAJ
【 Abstract 】

While increasing empirical evidence suggests that global time series forecasting models can achieve better forecasting performance than local ones, there is a research void regarding when and why global models fail to provide a good forecast. This paper uses anomaly detection algorithms and explainable artificial intelligence (XAI) to answer when and why a forecast should not be trusted. To address this issue, a dashboard was built to inform the user regarding (i) the relevance of the features for a particular forecast, (ii) which training samples most likely influenced the forecast outcome, (iii) why the forecast is considered an outlier, and (iv) a range of counterfactual examples showing how value changes in the feature vector can lead to a different outcome. Moreover, a modular architecture and a methodology were developed to iteratively remove noisy data instances from the training set, enhancing the overall performance of the global time series forecasting model. Finally, the proposed approach was validated on two publicly available real-world datasets.
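The iterative removal of noisy training instances described in the abstract can be illustrated with a minimal sketch: fit a model, flag instances whose absolute residuals are extreme outliers (here via a z-score threshold), drop them, and refit. All names, the least-squares model, and the z-score threshold are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def prune_noisy_instances(X, y, n_rounds=3, z_thresh=3.0):
    """Iteratively drop training instances whose absolute residuals are
    z-score outliers, then refit a least-squares model on the remainder."""
    keep = np.ones(len(X), dtype=bool)
    coef = None
    for _ in range(n_rounds):
        # Fit on the currently kept instances.
        coef, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        residuals = np.abs(X[keep] @ coef - y[keep])
        z = (residuals - residuals.mean()) / (residuals.std() + 1e-9)
        inliers = z < z_thresh
        if inliers.all():
            break  # no more outliers to remove
        idx = np.flatnonzero(keep)
        keep[idx[~inliers]] = False
    return coef, keep

# Usage: a synthetic linear series with a few deliberately corrupted labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(scale=0.1, size=200)
y[:5] += 50.0  # corrupt five training instances
coef, keep = prune_noisy_instances(X, y)
```

In this sketch the corrupted instances produce residuals far above the bulk of the data, so they are pruned in the first round and the refit coefficients recover the clean relationship.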

【 License 】

Unknown
