Thesis Details
Adversarial attacks and defenses for generative models
Agarwal, Rishika; Koyejo, Sanmi; Li, Bo
Keywords: Adversarial machine learning, generative models, attacks, defenses
Others: https://www.ideals.illinois.edu/bitstream/handle/2142/104943/AGARWAL-THESIS-2019.pdf?sequence=1&isAllowed=y
United States | English
Source: The Illinois Digital Environment for Access to Learning and Scholarship
【 Abstract 】

Adversarial machine learning is a field of research at the intersection of machine learning and security that studies the vulnerabilities of machine learning models which make them susceptible to attacks. Attacks are carried out by carefully crafting perturbed inputs that appear benign yet cause the model to behave in unexpected ways. To date, most work on adversarial attacks and defenses has focused on classification models. However, generative models are susceptible to attacks as well and thus warrant attention. We study attacks on generative models such as autoencoders and variational autoencoders, discuss the relative effectiveness of the attack methods, and explore simple defenses against them.
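
To make the kind of attack summarized above concrete, the following is a minimal, illustrative sketch (not the thesis' specific method): a gradient-based attack on an autoencoder in which the adversary searches for a small, norm-bounded perturbation of a benign input so that the model's reconstruction matches an attacker-chosen target. The TinyAutoencoder architecture, the output_attack helper, and all hyperparameters are assumptions made for illustration only.

# Illustrative sketch of an output-targeted attack on an autoencoder (assumed
# architecture and hyperparameters; not the method evaluated in the thesis).
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self, dim=784, latent=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(), nn.Linear(128, dim), nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x))

def output_attack(model, x, x_target, eps=0.1, step=0.01, iters=50):
    """Search for a small perturbation of x whose reconstruction matches x_target."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(iters):
        # Attack objective: make the reconstruction of the perturbed input look like the target.
        loss = ((model(x + delta) - x_target) ** 2).mean()
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()  # signed gradient step on the attack loss
            delta.clamp_(-eps, eps)            # keep the perturbation small (L-infinity bound)
            delta.grad.zero_()
    return (x + delta).clamp(0, 1).detach()

if __name__ == "__main__":
    model = TinyAutoencoder()        # stands in for a trained autoencoder
    x = torch.rand(1, 784)           # benign input
    x_target = torch.rand(1, 784)    # attacker's desired reconstruction
    x_adv = output_attack(model, x, x_target)
    print("perturbation L_inf norm:", (x_adv - x).abs().max().item())

A defense along the lines explored in the thesis would then be evaluated by how much it increases the reconstruction error between model(x_adv) and x_target for perturbations of the same size.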

【 Preview 】
Attachment List
Files | Size | Format | View
Adversarial attacks and defenses for generative models | 66704 KB | PDF | download
Document Metrics
Downloads: 19   Views: 14