Journal article details
NEUROCOMPUTING Volume: 289
Data driven exploratory attacks on black box classifiers in adversarial domains
Article
Sethi, Tegjyot Singh [1]; Kantardzic, Mehmed [1]
[1] Univ Louisville, Dept Comp Engn & Comp Sci, Data Min Lab, Louisville, KY 40292 USA
Keywords: Adversarial machine learning; Reverse engineering; Black box attacks; Classification; Data diversity; Cybersecurity
DOI: 10.1016/j.neucom.2018.02.007
Source: Elsevier
【 Abstract 】

While modern-day web applications aim to create impact at the civilization level, they have become vulnerable to adversarial activity, where the next cyber-attack can take any shape and can originate from anywhere. The increasing scale and sophistication of attacks has prompted the need for data driven solutions, with machine learning forming the core of many cybersecurity systems. However, machine learning was not designed with security in mind, and the essential assumption of stationarity, which requires that the training and testing data follow similar distributions, is violated in an adversarial domain. In this paper, an adversary's viewpoint of a classification based system is presented. Based on a formal adversarial model, the Seed-Explore-Exploit framework is introduced for simulating the generation of data driven and reverse engineering attacks on classifiers. Experimental evaluation, on 10 real world datasets and using the Google Cloud Prediction Platform, demonstrates the innate vulnerability of classifiers and the ease with which evasion can be carried out, without any explicit information about the classifier type, the training data or the application domain. The proposed framework, algorithms and empirical evaluation serve as a white hat analysis of the vulnerabilities, and aim to foster the development of secure machine learning frameworks. (C) 2018 Elsevier B.V. All rights reserved.
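For illustration of the kind of exploratory black-box probing the abstract describes, below is a minimal Python/NumPy sketch of a seed-explore-exploit style attack loop against a classifier exposed only through a predict callable. The function name, parameters, and the nearest-seed selection heuristic are hypothetical assumptions made for this sketch; they are not the paper's actual algorithm.

import numpy as np

def seed_explore_exploit(predict, malicious_seeds, benign_seeds,
                         n_explore=100, noise=0.1, rng=None):
    # predict: black-box callable taking a 2-D array of samples and returning
    # labels (assume 1 = flagged as malicious, 0 = benign). Only queries are used,
    # with no access to the classifier type or its training data.
    rng = np.random.default_rng(0) if rng is None else rng

    # Explore: probe the black box around interpolations between malicious and
    # benign seed points to locate regions that are labeled benign.
    explored = []
    for _ in range(n_explore):
        m = malicious_seeds[rng.integers(len(malicious_seeds))]
        b = benign_seeds[rng.integers(len(benign_seeds))]
        alpha = rng.random()
        candidate = alpha * m + (1 - alpha) * b + rng.normal(0, noise, m.shape)
        if predict(candidate[None, :])[0] == 0:  # classifier says benign
            explored.append(candidate)

    explored = np.array(explored)
    if len(explored) == 0:
        return explored

    # Exploit: keep the benign-labeled probes closest to the malicious seeds,
    # i.e., evasive samples that stay near the attacker's original intent.
    dists = np.min(
        np.linalg.norm(explored[:, None, :] - malicious_seeds[None, :, :], axis=2),
        axis=1)
    return explored[np.argsort(dists)[:10]]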

【 License 】

Free   

【 Preview 】
Attachment list
File                                  Size      Format
10_1016_j_neucom_2018_02_007.pdf      2627 KB   PDF
Document metrics
Downloads: 2    Views: 0