Conference paper details
3rd Asian Conference on Machine Learning
Support Vector Machines Under Adversarial Label Noise
Battista Biggio battista.biggio@diee.unica.it ; Dept. of Mathematics and Natural Sciences
PID: 117995
Source: CEUR
【 Abstract 】
In adversarial classification tasks like spam filtering and intrusion detection, malicious adversaries may manipulate data to thwart the outcome of an automatic analysis. Thus, besides achieving good classification performance, machine learning algorithms have to be robust against adversarial data manipulation to successfully operate in these tasks. While support vector machines (SVMs) have proven to be a very successful approach to classification problems, their effectiveness in adversarial classification tasks has not yet been extensively investigated. In this paper we present a preliminary investigation of the robustness of SVMs against adversarial data manipulation. In particular, we assume that the adversary has control over some training data and aims to subvert the SVM learning process. Under this assumption, we show that such an attack is indeed possible, and we propose a strategy to improve the robustness of SVMs to training data manipulation based on a simple kernel matrix correction.
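As a rough illustration of the setting described in the abstract, the sketch below (a minimal example, not the paper's method) assumes scikit-learn, an RBF kernel, random label flips as a stand-in for the adversary's control over some training labels, and a simple diagonal correction K + λI of the training kernel matrix; the flip strategy, the form of the correction, and all parameter values are illustrative assumptions.

# Hedged sketch: SVM trained on adversarially flipped labels, then retrained
# on a precomputed kernel with an illustrative diagonal correction. The flip
# strategy and the K + lambda*I correction are assumptions for illustration,
# not the correction proposed in the paper.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Adversary controls a fraction of the training labels and flips them.
flip_rate = 0.2  # hypothetical fraction of contaminated labels
n_flip = int(flip_rate * len(y_tr))
flip_idx = rng.choice(len(y_tr), size=n_flip, replace=False)
y_noisy = y_tr.copy()
y_noisy[flip_idx] = 1 - y_noisy[flip_idx]

# Baseline SVM trained on the contaminated labels.
clf = SVC(kernel="rbf", gamma=0.1, C=1.0).fit(X_tr, y_noisy)
print("accuracy with flipped labels:", clf.score(X_te, y_te))

# Illustrative kernel-matrix correction: add a constant to the diagonal of
# the training Gram matrix, which acts like extra regularization and limits
# the influence of any single (possibly mislabeled) training point.
K_tr = rbf_kernel(X_tr, X_tr, gamma=0.1)
K_te = rbf_kernel(X_te, X_tr, gamma=0.1)
lam = 1.0  # correction strength (hypothetical choice)
K_corr = K_tr + lam * np.eye(len(X_tr))

clf_corr = SVC(kernel="precomputed", C=1.0).fit(K_corr, y_noisy)
print("accuracy with corrected kernel:", clf_corr.score(K_te, y_te))

The diagonal term only modifies the training Gram matrix; the test-versus-training kernel is left unchanged. Adding λ to the diagonal is a standard robustness/regularization trick, and the paper's actual kernel correction may take a different form.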
Attachments
File: Support Vector Machines Under Adversarial Label Noise
Size: 533 KB
Format: PDF