Journal Article Details
Information
Adversarial Attacks Impact on the Neural Network Performance and Visual Perception of Data under Attack
Anton Konev [1], Yakov Usoltsev [1], Alexander Shelupanov [1], Evgeny Kostyuchenko [1], Balzhit Lodonova [1]
[1] Faculty of Security, Tomsk State University of Control Systems and Radioelectronics, 40 Lenin Avenue, 634050 Tomsk, Russia;
Keywords: digital signature; python; neural networks; biometric authentication; adversarial attack; fast gradient method
DOI: 10.3390/info13020077
Source: DOAJ
【 Abstract 】

Machine learning algorithms based on neural networks are vulnerable to adversarial attacks. Such attacks against authentication systems greatly reduce their accuracy, despite the complexity of generating an adversarial example. In this study, a white-box adversarial attack on an authentication system was carried out. The authentication system is based on a neural network perceptron trained on a dataset of frequency characteristics of handwritten signatures. For an attack on an atypical dataset, the following results were obtained: with an attack intensity of 25%, the system's availability for a particular user drops to 50%, and with a further increase in attack intensity, the accuracy drops to 5%.
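The keywords point to the fast gradient (sign) method as the white-box attack. Below is a minimal Python/NumPy sketch of how such an attack perturbs an input feature vector against a single-layer logistic perceptron; the model, names (fgsm, W, b, epsilon), and toy data are illustrative assumptions, not the authors' implementation or dataset.

# Minimal FGSM sketch against a single-layer logistic perceptron.
# All names and values here are illustrative assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, W, b, epsilon):
    """Perturb feature vector x with true label y (0 or 1).

    For p = sigmoid(W @ x + b) with binary cross-entropy loss L,
    the input gradient is dL/dx = (p - y) * W, so the adversarial
    example is x + epsilon * sign(dL/dx): each feature moves by
    epsilon in the direction that most increases the loss.
    """
    p = sigmoid(W @ x + b)
    grad_x = (p - y) * W                 # gradient of the loss w.r.t. the input
    return x + epsilon * np.sign(grad_x)

# Toy usage: a stand-in "frequency signature" feature vector.
rng = np.random.default_rng(0)
W = rng.normal(size=16)                  # stand-in trained weights
b = 0.1
x = rng.normal(size=16)                  # a genuine user's feature vector
x_adv = fgsm(x, y=1.0, W=W, b=b, epsilon=0.25)  # epsilon as a rough analog of "attack intensity"
print(sigmoid(W @ x + b), sigmoid(W @ x_adv + b))  # authentication score before vs. after the attack

Taking only the sign of the gradient bounds the per-feature perturbation by epsilon (an L-infinity budget), which is what lets a small, visually minor change in the input cause a large drop in the classifier's score.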

【 License 】

Unknown   
