Thesis Details
Game of threads: Enabling asynchronous poisoning attacks
Sanchez Vicarte, Jose Rodrigo; Fletcher, Christopher W.
Keywords: adversarial machine learning; trusted execution environment; SGX; operating systems; multi-processing; asynchronous stochastic gradient descent; poisoning attacks; machine learning
Others: https://www.ideals.illinois.edu/bitstream/handle/2142/106293/SANCHEZVICARTE-THESIS-2019.pdf?sequence=1&isAllowed=y
United States | English
Source: The Illinois Digital Environment for Access to Learning and Scholarship
【 Abstract 】

As data sizes continue to grow at an unprecedented rate, machine learning training is being forced to adopt asynchronous training algorithms to maintain performance and scalability. In asynchronous training, many threads share and update the model in a racy fashion, avoiding inter-thread synchronization.

This work studies the security implications of such code by introducing asynchronous poisoning attacks. Our attack influences the training outcome (e.g., degrades accuracy or biases the model towards an adversary-specified label) purely by scheduling asynchronous training threads in a malicious fashion. Since thread scheduling is outside the protections of modern trusted execution environments (TEEs), e.g., Intel SGX, our attack bypasses these protections even when the training set can be verified as correct. To the best of our knowledge, this is the first example of a class of applications losing its integrity guarantees despite being protected by an enclave-based TEE such as Intel SGX.

We demonstrate both accuracy-degradation and model-biasing attacks on the CIFAR-10 image recognition task using ResNet-style DNNs, attacking an asynchronous training implementation published by PyTorch. We perform a deeper analysis on a LeNet-style DNN, and proof-of-concept experiments on an SGX-enabled machine to validate our assumptions. Our most powerful accuracy-degradation attack makes no assumptions about the underlying training algorithm beyond its support for racy updates, yet can return a fully trained network to the accuracy of an untrained network, or to some accuracy in between, based on attacker-controlled parameters. Our model-biasing attack can bias the model towards an attacker-chosen label by up to $\sim2\times$ that label's normal prediction rate.
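For reference, the racy-update pattern the abstract describes is the Hogwild!-style asynchronous SGD idiom supported by PyTorch. Below is a minimal sketch of that pattern (illustrative only, with a hypothetical toy model and random stand-in data; it is not the thesis's attacked implementation): worker processes share one model's parameters and apply updates without locking, so the operating system's scheduler decides how updates interleave.

    # Minimal sketch of Hogwild!-style asynchronous SGD in PyTorch.
    # Toy model and random data are placeholders, not the thesis's setup.
    import torch
    import torch.multiprocessing as mp
    import torch.nn as nn
    import torch.nn.functional as F

    def train_worker(model, steps=100):
        # Each worker reads and writes the shared parameters with no locking;
        # when each update lands is decided by the OS scheduler, which is the
        # surface an asynchronous poisoning attack abuses.
        opt = torch.optim.SGD(model.parameters(), lr=0.01)
        for _ in range(steps):
            x = torch.randn(32, 10)          # stand-in batch
            y = torch.randint(0, 2, (32,))   # stand-in labels
            opt.zero_grad()
            loss = F.cross_entropy(model(x), y)
            loss.backward()
            opt.step()                       # racy write to the shared weights

    if __name__ == "__main__":
        model = nn.Linear(10, 2)
        model.share_memory()  # place parameters in shared memory for all workers
        workers = [mp.Process(target=train_worker, args=(model,)) for _ in range(4)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()

Note that no individual gradient here is malicious; per the abstract, the attack arises purely from how a malicious scheduler orders the workers' racy updates.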

【 Preview 】
Attachments
Files | Size | Format | View
Game of threads: Enabling asynchronous poisoning attacks | 4528 KB | PDF | download
Document Metrics
Downloads: 7; Views: 6