PeerJ Computer Science
Generating adversarial examples without specifying a target model
Ji Zhang1, Gaoming Yang2, Xingzhu Liang2, Mingwei Li2, Xianjing Fang2
[1] Department of Mathematics and Computing, University of Southern Queensland, Queensland, Australia
[2] School of Computer Science and Engineering, Anhui University of Science and Technology, Huainan, China
Keywords: Deep learning; Adversarial example; Generative adversarial networks; Adversarial machine learning
DOI: 10.7717/peerj-cs.702
Source: DOAJ
Abstract
Adversarial examples are regarded as a security threat to deep learning models, and there are many ways to generate them. However, most existing methods require query access to the target model while crafting their examples. In more practical situations, especially under the black-box setting, an attacker who issues too many queries is easily detected. To solve this problem, we propose the Attack Without a Target Model (AWTM). Our algorithm does not specify any target model when generating adversarial examples, so it never needs to query the target. Experimental results show that it achieves a maximum attack success rate of 81.78% on the MNIST dataset and 87.99% on the CIFAR-10 dataset. In addition, because it is a GAN-based method, its time cost is low.
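The core idea — a generator that maps random noise to a bounded perturbation, with no query to any target model at attack time — can be illustrated with a minimal sketch. This is not the authors' AWTM implementation; the single-layer generator, its weights `W`, and the L-infinity budget `eps` are all hypothetical placeholders standing in for a pre-trained GAN generator.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, W):
    # Hypothetical pre-trained generator: one tanh layer mapping latent
    # noise z to an image-shaped perturbation in [-1, 1]. In a GAN-based
    # attack, W would come from adversarial training, not random init.
    return np.tanh(z @ W)

def make_adversarial(x, W, eps=0.1):
    # Craft an adversarial example without querying any target model:
    # sample latent noise, generate a perturbation, clip it to an
    # L-infinity budget eps, and add it to the clean input x.
    z = rng.standard_normal(16)
    delta = np.clip(generator(z, W), -eps, eps)  # enforce ||delta||_inf <= eps
    return np.clip(x + delta, 0.0, 1.0)          # keep pixels in valid range

# Toy usage on a flattened 28x28 "MNIST-like" input.
W = rng.standard_normal((16, 784)) * 0.1
x = rng.random(784)
x_adv = make_adversarial(x, W)
```

Note that the whole attack is a single forward pass through the generator, which is why GAN-based methods like AWTM have a low per-example time cost compared to iterative, query-based attacks.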
License
Unknown