Journal Article Details
IEEE Access
An Inverse-Free and Scalable Sparse Bayesian Extreme Learning Machine for Classification Problems
Jiahua Luo [1]; Chi-Man Vong [1]; Zhenbao Liu [2]; Chuangquan Chen [3]
[1] Department of Computer and Information Science, University of Macau, Macau; [2] School of Civil Aviation, Northwestern Polytechnical University, Xi'an, China
Keywords: Inverse-free; quasi-Newton method; sparse Bayesian extreme learning machine; large-scale classification; sparse model
DOI: 10.1109/ACCESS.2021.3089539
Source: DOAJ
【 Abstract 】

Sparse Bayesian Extreme Learning Machine (SBELM) constructs an extremely sparse probabilistic model with low computational cost and high generalization. However, the update rule for its hyperparameters (the ARD prior) uses the diagonal elements of the inverted covariance matrix over the full training dataset, which raises two issues. First, inverting the Hessian matrix may suffer from ill-conditioning in some cases, which prevents SBELM from converging. Second, inverting the big covariance matrix to update the ARD priors costs $O(L^{3})$ ($L$: number of hidden nodes), which may cause memory overflow. To address these issues, this paper proposes an inverse-free SBELM called QN-SBELM, which integrates the gradient-based quasi-Newton (QN) method into SBELM to approximate the inverse covariance matrix. It takes only $O(L^{2})$ computational complexity and is simultaneously scalable to large problems. QN-SBELM was evaluated on benchmark datasets of different sizes. Experimental results verify that QN-SBELM achieves more accurate results than SBELM with a sparser model, provides more stable solutions, and extends well to large-scale problems.
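The core idea is to sidestep the explicit $O(L^{3})$ matrix inversion by maintaining a quasi-Newton approximation of the inverse covariance matrix. As a minimal sketch of how such an approximation avoids inversion, the code below shows a generic BFGS-style inverse-Hessian update, not the authors' exact QN-SBELM rule; the function name and the sanity check are illustrative assumptions. Each update uses only matrix-vector and outer products, hence $O(L^{2})$ work:

import numpy as np

def bfgs_inverse_hessian_update(H, s, y):
    """One BFGS-style update of an inverse-Hessian approximation H (L x L).

    s -- step between successive parameter vectors, x_{k+1} - x_k
    y -- corresponding change in gradient, g_{k+1} - g_k

    Only matrix-vector and outer products are used, so each update costs
    O(L^2), versus O(L^3) for inverting the covariance matrix directly.
    (Illustrative generic quasi-Newton update, not the paper's exact rule.)
    """
    rho = 1.0 / (y @ s)          # requires the curvature condition y @ s > 0
    Hy = H @ y                   # O(L^2) matrix-vector product
    return (H
            - rho * np.outer(Hy, s)
            - rho * np.outer(s, Hy)
            + (rho ** 2 * (y @ Hy) + rho) * np.outer(s, s))

# Sanity check: the updated H satisfies the secant equation H_new @ y == s.
rng = np.random.default_rng(0)
H0 = np.eye(6)
s = rng.standard_normal(6)
y = rng.standard_normal(6)
y = y if y @ s > 0 else -y       # enforce the curvature condition
H1 = bfgs_inverse_hessian_update(H0, s, y)
print(np.allclose(H1 @ y, s))    # True

The secant property above holds for any valid (s, y) pair; in the paper's setting, H would play the role of the approximate inverse covariance matrix used to update the ARD priors.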

【 License 】

Unknown   

  Document Metrics
  Downloads: 0 Views: 0