Journal Article Details
NEUROCOMPUTING Volume: 453
Multi-objective search of robust neural architectures against multiple types of adversarial attacks
Article
Liu, Jia1  Jin, Yaochu1 
[1] Univ Surrey, Dept Comp Sci, Surrey GU2 7XH, Surrey, England
Keywords: Multi-objective evolutionary algorithm; Adversarial attacks; Neural architecture search; Robustness
DOI: 10.1016/j.neucom.2021.04.111
Source: Elsevier
【 Abstract 】

Many existing deep learning models are vulnerable to adversarial examples that are imperceptible to humans. To address this issue, various methods have been proposed to design network architectures that are robust to one particular type of adversarial attack. In practice, however, it is impossible to predict beforehand which type of attack a machine learning model may suffer from. To address this challenge, we propose to search for deep neural architectures that are robust to five well-known types of adversarial attacks using a multi-objective evolutionary algorithm. To reduce the computational cost, the normalized error rate under one randomly chosen attack is calculated as the robustness measure for each newly generated neural architecture at each generation. All non-dominated network architectures obtained by the proposed method are then fully trained against randomly chosen adversarial attacks and tested on two widely used datasets. Our experimental results demonstrate that the optimized neural architectures found by the proposed approach outperform state-of-the-art networks widely used in the literature in terms of classification accuracy under different adversarial attacks. (c) 2021 Elsevier B.V. All rights reserved.
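The search loop sketched in the abstract can be illustrated with a minimal toy example. The code below is a hedged sketch only, not the authors' implementation: the genome encoding, the surrogate `evaluate` function, and the survivor-selection step are all hypothetical stand-ins (a real run would train each candidate network and measure its error under the sampled attack), but the core ideas match the abstract — each candidate is scored on two objectives, robustness is approximated by the normalized error under one randomly chosen attack per evaluation, and the non-dominated (Pareto) front is kept across generations.

```python
import random

# Five well-known attack types, as named in the paper's setting.
ATTACKS = ["FGSM", "PGD", "C&W", "DeepFool", "OnePixel"]

def evaluate(genome, rng):
    """Surrogate for training/attacking a real network (hypothetical).

    Returns (clean_error, robust_error); robustness uses ONE randomly
    chosen attack per evaluation, as the abstract describes, to avoid
    evaluating every candidate against all five attacks.
    """
    clean_error = sum(genome) / len(genome)            # placeholder objective 1
    attack = rng.choice(ATTACKS)                       # sample one attack type
    robust_error = clean_error + 0.1 * ATTACKS.index(attack) / len(ATTACKS)
    return clean_error, robust_error

def dominates(a, b):
    """a dominates b: no worse in both objectives, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(population, fitness):
    """Return the non-dominated (Pareto-front) subset of the population."""
    return [g for g, f in zip(population, fitness)
            if not any(dominates(f2, f) for f2 in fitness if f2 != f)]

def search(pop_size=8, generations=5, genome_len=4, seed=0):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        # Gaussian mutation as a simplified variation operator.
        children = [[min(1.0, max(0.0, g + rng.gauss(0, 0.1))) for g in p]
                    for p in pop]
        combined = pop + children
        fits = [evaluate(g, rng) for g in combined]
        front = non_dominated(combined, fits)
        # Keep the front, then fill with arbitrary survivors
        # (a real multi-objective EA would use crowding distance here).
        rest = [g for g in combined if g not in front]
        rng.shuffle(rest)
        pop = (front + rest)[:pop_size]
    fits = [evaluate(g, rng) for g in pop]
    return non_dominated(pop, fits)
```

In the paper's pipeline the architectures on the final front are then fully trained against randomly chosen attacks before being tested; the sketch stops at returning the front.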

【 License 】

Free   

【 Preview 】
Attachment list
Files Size Format View
10_1016_j_neucom_2021_04_111.pdf 1468KB PDF download
Document metrics
Downloads: 5   Views: 0