Journal Article Details
PATTERN RECOGNITION, Vol. 101
Towards explaining anomalies: A deep Taylor decomposition of one-class models
Article
Kauffmann, Jacob1  Mueller, Klaus-Robert1,2,3  Montavon, Gregoire1 
[1] Tech Univ Berlin, Dept Elect Engn & Comp Sci, D-10587 Berlin, Germany
[2] Korea Univ, Dept Brain & Cognit Engn, Seoul 02841, South Korea
[3] Max Planck Inst Informat, Stuhlsatzenhausweg, D-66123 Saarbrucken, Germany
Keywords: Outlier detection; Explainable machine learning; Deep Taylor decomposition; Kernel machines; Unsupervised learning
DOI: 10.1016/j.patcog.2020.107198
Source: Elsevier
【 Abstract 】

Detecting anomalies in data is a common machine learning task, with numerous applications in the sciences and industry. In practice, it is not always sufficient to reach high detection accuracy; one would also like to understand why a given data point has been predicted to be anomalous. We propose a principled approach for one-class SVMs (OC-SVM) that draws on the novel insight that these models can be rewritten as distance/pooling neural networks. This 'neuralization' step lets us apply deep Taylor decomposition (DTD), a methodology that leverages the model structure in order to quickly and reliably explain decisions in terms of input features. The proposed method (called 'OC-DTD') is applicable to a number of common distance-based kernel functions, and it outperforms baselines such as sensitivity analysis, distance to nearest neighbor, or edge detection. (C) 2020 The Authors. Published by Elsevier Ltd.
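The 'neuralization' idea from the abstract can be illustrated with a minimal sketch: a Gaussian-kernel OC-SVM decision function is algebraically equivalent to a two-layer network, where the first layer computes squared distances to the support vectors and the second layer applies a soft-min pooling. The names below (`support_vectors`, `alphas`, `gamma`) and the random data are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
support_vectors = rng.normal(size=(10, 2))  # hypothetical support vectors x_i
alphas = np.full(10, 0.1)                   # hypothetical dual coefficients
gamma = 0.5                                 # Gaussian kernel width parameter

def kernel_score(x):
    """Standard OC-SVM decision function: sum_i alpha_i * k(x, x_i)."""
    d2 = ((support_vectors - x) ** 2).sum(axis=1)
    return (alphas * np.exp(-gamma * d2)).sum()

def neuralized_outlierness(x):
    """Same model viewed as a distance/pooling network:
    layer 1 computes per-prototype outlierness from squared distances,
    layer 2 aggregates them with a soft-min pooling."""
    d2 = ((support_vectors - x) ** 2).sum(axis=1)  # distance layer
    h = gamma * d2 - np.log(alphas)                # per-prototype outlierness
    return -np.log(np.exp(-h).sum())               # soft-min pooling

x = rng.normal(size=2)
# The two views agree: outlierness = -log(decision function)
assert np.isclose(neuralized_outlierness(x), -np.log(kernel_score(x)))
```

Once the model is expressed in this layered form, layer-wise explanation techniques such as deep Taylor decomposition can be applied to propagate the outlierness score back to the input features.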

【 License 】

Free
