Jisuanji Kexue (Computer Science)
Survey of Research Progress on Adversarial Examples in Images
CHEN Meng-xuan, ZHANG Zhen-yong, JI Shou-ling, WEI Gui-yi, SHAO Jun1
[1] School of Computer and Information Engineering, Zhejiang Gongshang University, Hangzhou 310018, China
Keywords: deep learning; image field; adversarial examples; adversarial attacks; defense methods; physical world
DOI: 10.11896/jsjkx.210800087
Source: DOAJ
【 Abstract 】
With the development of deep learning theory, deep neural networks have made a series of breakthroughs and have been widely applied in various fields. Among them, applications in the image field, such as image classification, are the most popular. However, research suggests that deep neural networks carry many security risks, especially the threat from adversarial examples, which seriously hinders the application of image classification. To address this challenge, many recent research efforts have been dedicated to adversarial examples in images, and a large number of results have emerged. This paper first introduces the relevant concepts and terms of adversarial examples in images, then reviews adversarial attack methods and defense methods based on the existing research results. In particular, it classifies attacks according to the attacker's capability and defenses according to their underlying approach, and analyzes the characteristics of and connections among the different categories. Secondly, it briefly describes adversarial attacks in the physical world. Finally, it discusses the challenges of adversarial examples in images and potential future research directions.
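To make the notion of an adversarial example concrete, below is a minimal sketch of a gradient-sign perturbation in the spirit of the Fast Gradient Sign Method (FGSM), a canonical attack that surveys of this area typically cover. The toy logistic-regression "model", its weights, the input point, and the epsilon value are all illustrative assumptions, not taken from the paper; real attacks target deep networks on images.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, epsilon):
    """Craft an adversarial example x_adv = x + epsilon * sign(dL/dx)
    for a logistic model p = sigmoid(w.x + b) with cross-entropy loss."""
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y_true) * w  # dL/dx for sigmoid + cross-entropy
    return x + epsilon * np.sign(grad_x)

# Toy data: a point the model correctly classifies as class 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
y = 1.0

x_adv = fgsm_perturb(x, w, b, y, epsilon=0.5)
p_clean = sigmoid(np.dot(w, x) + b)
p_adv = sigmoid(np.dot(w, x_adv) + b)
# The bounded perturbation lowers the model's confidence in the true class.
print(p_clean, p_adv)
```

The key property illustrated here is that the perturbation is small and bounded per coordinate (at most epsilon), yet chosen in the direction that most increases the loss, which is what makes such examples hard to spot yet effective against the classifier.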
【 License 】
Unknown