Journal Article Details
NEUROCOMPUTING, Volume 456
Uncertainty quantification in extreme learning machine: Analytical developments, variance estimates and confidence intervals
Article
Guignard, Fabian [1]; Amato, Federico [1]; Kanevski, Mikhail [1]
[1] Univ Lausanne, Fac Geosci & Environm, Inst Earth Surface Dynam, Lausanne, Switzerland
Keywords: Extreme learning machine; Standard error; Model variance; Confidence interval; Uncertainty quantification; Regularization
DOI: 10.1016/j.neucom.2021.04.027
Source: Elsevier
【 Abstract 】

Uncertainty quantification is crucial to assess prediction quality of a machine learning model. In the case of Extreme Learning Machines (ELM), most methods proposed in the literature make strong assumptions on the data, ignore the randomness of input weights or neglect the bias contribution in confidence interval estimations. This paper presents novel estimations that overcome these constraints and improve the understanding of ELM variability. Analytical derivations are provided under general assumptions, supporting the identification and the interpretation of the contribution of different variability sources. Under both homoskedasticity and heteroskedasticity, several variance estimates are proposed, investigated, and numerically tested, showing their effectiveness in replicating the expected variance behaviours. Finally, the feasibility of confidence intervals estimation is discussed by adopting a critical approach, hence raising the awareness of ELM users concerning some of their pitfalls. The paper is accompanied with a scikit-learn compatible Python library enabling efficient computation of all estimates discussed herein. (c) 2021 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
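To make the variance and confidence interval estimates mentioned above concrete, the following is a minimal illustrative sketch of an ELM regression with a homoskedastic variance estimate, written under the assumption that, conditional on the fixed random input weights, the ELM is a linear smoother so that classical least-squares formulas apply. The toy data, variable names, and the `hidden` helper are hypothetical; this is not the API of the authors' accompanying scikit-learn compatible library, and it ignores the input-weight randomness and bias contributions that the paper analyses.

import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data (illustrative only).
n = 200
X = np.sort(rng.uniform(-3, 3, size=(n, 1)), axis=0)
y = np.sin(X).ravel() + 0.2 * rng.standard_normal(n)

# Extreme Learning Machine: random hidden layer, least-squares output weights.
L = 50                                        # number of hidden neurons
W = rng.standard_normal((X.shape[1], L))      # random input weights (kept fixed)
b = rng.standard_normal(L)                    # random biases

def hidden(X):
    """Hidden-layer feature map with tanh activation (hypothetical helper)."""
    return np.tanh(X @ W + b)

H = hidden(X)
beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights by least squares

# Homoskedastic noise variance estimate for the linear smoother y_hat = H beta:
# sigma2_hat = RSS / (n - effective degrees of freedom).
y_fit = H @ beta
HtH_inv = np.linalg.pinv(H.T @ H)
df = np.trace(H @ HtH_inv @ H.T)
sigma2_hat = np.sum((y - y_fit) ** 2) / (n - df)

# Pointwise standard errors and an approximate 95% confidence band on a test grid.
X_test = np.linspace(-3, 3, 100).reshape(-1, 1)
H_test = hidden(X_test)
y_pred = H_test @ beta
var_pred = sigma2_hat * np.einsum("ij,jk,ik->i", H_test, HtH_inv, H_test)
se_pred = np.sqrt(var_pred)
lower, upper = y_pred - 1.96 * se_pred, y_pred + 1.96 * se_pred

The band above covers only the variance of the fitted mean under fixed random weights; the paper's estimators additionally account for the variability induced by the random input weights and for heteroskedastic noise.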

【 License 】

Free   

【 Preview 】
Attachment list
File                                  Size     Format
10_1016_j_neucom_2021_04_027.pdf      1112 KB  PDF