Journal Article Details
NEUROCOMPUTING, Volume 339
Learning from data streams using kernel least-mean-square with multiple kernel-sizes and adaptive step-size
Article
Garcia-Vega, Sergio1  Zeng, Xiao-Jun1  Keane, John1 
[1] Univ Manchester, Sch Comp Sci, Kilburn Bldg,Oxford Rd, Manchester M13 9PL, Lancs, England
Keywords: Learning from data streams; Sequence prediction; Kernel least-mean-square; Kernel-size; Step-size
DOI  :  10.1016/j.neucom.2019.01.055
Source: Elsevier
【 Abstract 】

A learning task is sequential if its data samples become available over time; kernel adaptive filters (KAFs) are sequential learning algorithms. There are three main challenges in KAFs: (1) selection of an appropriate Mercer kernel; (2) the lack of an effective method to determine kernel-sizes in an online learning context; (3) how to tune the step-size parameter. This work introduces a framework for online prediction that addresses the latter two of these open challenges. The kernel-sizes, unlike traditional KAF formulations, are both created and updated in an online sequential way. Further, to improve convergence time, we propose an adaptive step-size strategy that minimizes the mean-square-error (MSE) using a stochastic gradient algorithm. The proposed framework has been tested on three real-world data sets; results show both faster convergence to relatively low values of MSE and better accuracy when compared with KAF-based methods, long short-term memory, and recurrent neural networks. (C) 2019 Elsevier B.V. All rights reserved.
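To make the setting concrete, below is a minimal sketch of the standard kernel least-mean-square (KLMS) baseline that the paper builds on: each incoming sample becomes a dictionary center, and its coefficient is the step-size times the instantaneous prediction error. This is an illustrative implementation under assumed defaults (Gaussian kernel, fixed kernel-size `sigma`, fixed step-size `eta`), not the paper's multi-kernel-size, adaptive-step-size method; function names are hypothetical.

```python
import numpy as np

def gaussian_kernel(x, c, sigma):
    # Gaussian (RBF) kernel; sigma is the kernel-size
    return np.exp(-np.sum((x - c) ** 2) / (2.0 * sigma ** 2))

def klms_predict(x, centers, weights, sigma):
    # Prediction is a kernel expansion over stored centers
    return sum(w * gaussian_kernel(x, c, sigma)
               for w, c in zip(weights, centers))

def klms_train(X, d, eta=0.5, sigma=1.0):
    """Sequential KLMS: for each sample, predict, compute the error,
    then store the sample as a new center with coefficient eta * error."""
    centers, weights, sq_errors = [], [], []
    for x, target in zip(X, d):
        y = klms_predict(x, centers, weights, sigma)
        e = target - y
        centers.append(x)
        weights.append(eta * e)
        sq_errors.append(e ** 2)
    return centers, weights, sq_errors

# Usage: one-step-ahead prediction of a sine wave from 3-sample windows
s = np.sin(0.1 * np.arange(200))
X = np.array([s[i:i + 3] for i in range(len(s) - 3)])
d = s[3:]
centers, weights, sq_errors = klms_train(X, d)
```

In this baseline both `sigma` and `eta` are fixed by hand, which is exactly the limitation the paper targets: its framework instead creates and updates multiple kernel-sizes online and adapts the step-size by stochastic-gradient minimization of the MSE.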

【 License 】

Free

【 Preview 】
Attachment list
File | Size | Format | View
10_1016_j_neucom_2019_01_055.pdf | 850KB | PDF | download
Document metrics
Downloads: 3; Views: 0