Journal Article Details
NEUROCOMPUTING, Volume 370
A simple and efficient architecture for trainable activation functions
Article
Apicella, Andrea [1]; Isgro, Francesco [1]; Prevete, Roberto [1]
[1] Università di Napoli Federico II, Dipartimento di Ingegneria Elettrica e delle Tecnologie dell'Informazione, Naples, Italy
Keywords: Neural networks; Machine learning; Activation functions; Trainable activation functions
DOI: 10.1016/j.neucom.2019.08.065
Source: Elsevier
【 Abstract 】

Automatically learning the best activation function for a task is an active topic in neural network research. Despite promising results, it remains challenging to devise a method for learning an activation function that is both theoretically simple and easy to implement. Moreover, most of the methods proposed so far either introduce new parameters or require different learning techniques. In this work, we propose a simple method for obtaining trainable activation functions by adding to the neural network local sub-networks with a small number of neurons. Experiments show that this approach can lead to better results than using a pre-defined activation function, without requiring a large number of additional parameters to be learned. (C) 2019 Elsevier B.V. All rights reserved.
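The abstract's idea of a trainable activation realized as a local sub-network with few neurons can be sketched as follows. This is a minimal illustrative sketch, not the authors' exact formulation: the activation is modeled as a tiny one-hidden-layer sub-network applied elementwise, and the names `hidden_size`, `a`, `b`, and `c` are assumptions introduced here for illustration.

```python
import numpy as np

class TrainableActivation:
    """Illustrative sketch: an elementwise activation f(x) parameterized
    as a small one-hidden-layer sub-network, f(x) = sum_k c_k * relu(a_k*x + b_k).
    The parameters a, b, c would be learned jointly with the host network."""

    def __init__(self, hidden_size=4, seed=0):
        rng = np.random.default_rng(seed)
        self.a = rng.normal(size=hidden_size)  # input weights of the sub-network
        self.b = rng.normal(size=hidden_size)  # biases of the hidden neurons
        self.c = rng.normal(size=hidden_size)  # output weights of the sub-network

    def __call__(self, x):
        # Broadcast each scalar input over the k hidden units, apply ReLU,
        # then combine with the output weights; result has the shape of x.
        z = np.maximum(0.0, np.multiply.outer(x, self.a) + self.b)
        return z @ self.c
```

Because the sub-network has only a handful of neurons and can be shared across units, the number of extra trainable parameters stays small, which matches the paper's stated goal.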

【 License 】

Free
