Journal Article Details
PATTERN RECOGNITION Volume: 88
Deep representation design from deep kernel networks
Article
Jiu, Mingyuan1  Sahbi, Hichem2 
[1] Zhengzhou Univ, Sch Informat Engn, Zhengzhou 450001, Henan, Peoples R China
[2] Sorbonne Univ, UPMC, LIP6, CNRS, F-75005 Paris, France
Keywords: Multiple kernel learning; Kernel design; Deep networks; Efficient computation; Image annotation
DOI  :  10.1016/j.patcog.2018.12.005
Source: Elsevier
【 Abstract 】

Deep kernel learning aims at designing nonlinear combinations of multiple standard elementary kernels by training deep networks. This scheme has proven to be effective, but intractable when handling large-scale datasets, especially as the depth of the trained networks increases; indeed, the complexity of evaluating these networks scales quadratically w.r.t. the size of the training data and linearly w.r.t. the depth of the trained networks. In this paper, we address the issue of efficient computation in Deep Kernel Networks (DKNs) by designing effective maps in the underlying Reproducing Kernel Hilbert Spaces (RKHS). Given a pretrained DKN, our method builds its associated Deep Map Network (DMN) whose inner product approximates the original network while being far more efficient. The design principle of our method is greedy and achieved layer-wise, by finding maps that approximate DKNs at different (input, intermediate and output) layers. This design also considers an extra fine-tuning step based on unsupervised learning, which further enhances the generalization ability of the trained DMNs. When plugged into SVMs, these DMNs turn out to be as accurate as the underlying DKNs while being at least an order of magnitude faster on large-scale datasets, as shown through extensive experiments on the challenging ImageCLEF and COREL5k benchmarks and the Banana dataset. (C) 2018 Elsevier Ltd. All rights reserved.
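The core idea of the abstract, replacing implicit kernel evaluations with an explicit map whose inner product approximates the kernel, can be illustrated with a much simpler classical construction. The sketch below is not the paper's layer-wise DMN algorithm; it is a hedged, minimal Nyström approximation of a single RBF kernel, showing how a finite-dimensional map `phi` satisfies `<phi(x), phi(y)> ≈ k(x, y)` so that linear methods (e.g., SVMs) can be applied cheaply. All function names here are illustrative assumptions.

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    """Exact RBF kernel matrix k(x, y) = exp(-gamma * ||x - y||^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_map(X, landmarks, gamma=1.0):
    """Explicit finite-dimensional map phi such that
    phi(X) @ phi(Y).T approximates rbf(X, Y)."""
    # Kernel among landmark points; its inverse square root defines the map.
    K_mm = rbf(landmarks, landmarks, gamma)
    U, s, _ = np.linalg.svd(K_mm)
    K_mm_inv_sqrt = U @ np.diag(1.0 / np.sqrt(np.maximum(s, 1e-12))) @ U.T
    return rbf(X, landmarks, gamma) @ K_mm_inv_sqrt

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
landmarks = X[:50]          # subsampled landmark points (m << n in practice)

Phi = nystrom_map(X, landmarks)      # explicit features, shape (200, 50)
K_exact = rbf(X, X)                  # O(n^2) implicit kernel
K_approx = Phi @ Phi.T               # inner products of explicit maps
max_err = np.abs(K_exact - K_approx).max()
```

The efficiency argument mirrors the abstract's: once `Phi` is built, training and prediction operate on 50-dimensional vectors instead of evaluating the kernel against all training samples, trading exactness for a large constant-factor speedup. On the landmark points themselves the approximation is exact by construction, since `phi(landmarks) = K_mm^{1/2}`.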

【 License 】

Free
