Journal Article Details
NEUROCOMPUTING, Volume 218
Disjunctive normal networks
Article
Sajjadi, Mehdi [1]; Seyedhosseini, Mojtaba [1]; Tasdizen, Tolga [1]
[1] Univ Utah, Dept Elect & Comp Engn, Salt Lake City, UT 84112 USA
Keywords: Supervised learning; Neural networks; Classification
DOI: 10.1016/j.neucom.2016.08.047
Source: Elsevier
【 Abstract 】

Artificial neural networks are powerful pattern classifiers. They form the basis of the highly successful and popular Convolutional Networks, which offer state-of-the-art performance on several computer vision tasks. However, in many general, non-vision tasks, neural networks are surpassed by methods such as support vector machines and random forests, which are also easier to use and faster to train. One reason is that the backpropagation algorithm used to train artificial neural networks usually starts from a random weight initialization, which complicates the optimization process, leading to long training times, and increases the risk of stopping at a poor local minimum. Several initialization schemes and pre-training methods have been proposed to improve the efficiency and performance of neural network training. However, this problem is rooted in the architecture of neural networks themselves. We use the disjunctive normal form and approximate the Boolean conjunction operations with products to construct a novel network architecture. The proposed model can be trained by minimizing an error function, and it allows an effective and intuitive initialization that avoids poor local minima. We show that the proposed structure provides efficient coverage of the decision space, which leads to state-of-the-art classification accuracy and fast training times. (C) 2016 Elsevier B.V. All rights reserved.
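
The core construction described in the abstract, a disjunction of conjunctions in which the Boolean operations are relaxed to products of sigmoid outputs, can be sketched compactly. The NumPy snippet below is an illustrative reconstruction, not the authors' implementation: each group approximates a conjunction of half-spaces by the product of logistic sigmoids over linear discriminants, and the disjunction over groups follows De Morgan's law, OR(a_i) = 1 - prod_i(1 - a_i). The function name dnn_forward, the tensor shapes, and the toy two-group configuration are assumptions made for this example.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dnn_forward(X, W, B):
    # X: (n_samples, n_features) inputs
    # W: (n_groups, n_discriminants, n_features) discriminant weights
    # B: (n_groups, n_discriminants) biases
    # Sigmoid response of every linear discriminant for every sample.
    h = sigmoid(np.einsum('gjf,nf->gjn', W, X) + B[:, :, None])
    # Soft conjunction within each group: product over its discriminants.
    conj = h.prod(axis=1)                       # shape (n_groups, n_samples)
    # Soft disjunction across groups via De Morgan's law.
    return 1.0 - np.prod(1.0 - conj, axis=0)    # shape (n_samples,)

# Toy usage: two conjunctive groups, each intersecting three half-planes in 2-D.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 3, 2))
B = rng.normal(size=(2, 3))
X = rng.normal(size=(5, 2))
print(dnn_forward(X, W, B))                     # class scores in (0, 1)

Because every output lies in (0, 1), such a model can be trained by minimizing a standard error function against binary labels, and the conjunctive groups suggest the kind of intuitive initialization the abstract mentions, e.g., placing each conjunction around a cluster of positive-class samples (a plausible reading of the abstract, not a detail stated in it).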

【 License 】

Free   

【 Preview 】
Attachment list
File                                   Size     Format
10_1016_j_neucom_2016_08_047.pdf       1116 KB  PDF