Thesis Details
Multi-model-based defense against adversarial examples for neural networks
Srisakaokul, Siwakorn ; Xie, Tao ; Li, Bo
Keywords: security and privacy, machine learning
Others: https://www.ideals.illinois.edu/bitstream/handle/2142/108026/SRISAKAOKUL-THESIS-2020.pdf?sequence=1&isAllowed=y
United States | English
Source: The Illinois Digital Environment for Access to Learning and Scholarship
Format: PDF
【 Abstract 】

Neural networks have recently been used to solve many real-world tasks, such as image recognition, and can achieve high effectiveness on these tasks. Despite being widely used in many applications, neural network models have been found to be vulnerable to adversarial examples, i.e., carefully crafted examples aiming to mislead machine learning models. Adversarial examples can pose serious risks to safety- and security-critical applications. Existing defense approaches remain vulnerable to emerging attacks, especially in a white-box attack scenario. In this thesis, we focus on mitigating adversarial attacks by making machine learning models more robust against them. In particular, we propose a new defense approach, named MulDef, based on robustness diversity. Our approach consists of (1) a general defense framework based on diverse models and (2) a technique for generating diverse models to achieve high defense capability. Our framework generates multiple models (constructed from the target model) to form a model family designed to achieve robustness diversity, i.e., an adversarial example crafted to attack one model may not succeed in attacking other models in the family. At runtime, a model is randomly selected from the family to process each input example. Our evaluation results show that MulDef (with only up to 5 models in the family) can substantially improve the target model's robustness against adversarial examples by 19-78% in a white-box attack scenario on the MNIST, CIFAR-10, and Tiny ImageNet datasets, while maintaining similar accuracy on legitimate examples. Our general framework can also inspire rich future research on constructing model families that achieve higher robustness diversity.
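As a concrete illustration of the framework's runtime behavior, the following is a minimal Python sketch of the per-input random model selection described above. All identifiers here (MulDefEnsemble, ConstantModel, predict) are illustrative assumptions rather than names from the thesis, and the construction of the diverse model family itself is elided.

```python
import random

class MulDefEnsemble:
    """Minimal sketch of MulDef-style runtime selection.
    Names are illustrative, not taken from the thesis."""

    def __init__(self, models):
        # `models` is the model family: diverse models constructed from
        # the target model, each exposing a predict(x) method.
        self.models = models

    def predict(self, x):
        # At runtime, a model is randomly selected from the family for
        # each input example, so an adversarial example crafted against
        # one member may fail against the member actually chosen.
        model = random.choice(self.models)
        return model.predict(x)

# Usage with hypothetical stand-in models in place of the real family.
class ConstantModel:
    def __init__(self, label):
        self.label = label

    def predict(self, x):
        return self.label

family = [ConstantModel(i) for i in range(5)]  # up to 5 models, as in the evaluation
defense = MulDefEnsemble(family)
print(defense.predict([0.1, 0.2]))  # prediction from one randomly chosen member
```

The randomness is the point of the design: since the attacker cannot predict which family member will process a given input, a perturbation optimized against one model must transfer to the others to succeed, which the robustness diversity of the family is meant to prevent.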

【 Preview 】
Attachments
File | Size | Format | View
Multi-model-based defense against adversarial examples for neural networks | 693KB | PDF | download
Document metrics
Downloads: 41 | Views: 25