Dissertation Details
Defending distributed systems against adversarial attacks: consensus, consensus-based learning, and statistical learning
Su, Lili
Keywords: Fault-tolerance; Adversaries; Security; Consensus; Optimization; Hypothesis testing; Statistical learning
Others: https://www.ideals.illinois.edu/bitstream/handle/2142/98441/SU-DISSERTATION-2017.pdf?sequence=1&isAllowed=y
United States | English
Source: The Illinois Digital Environment for Access to Learning and Scholarship
【 Abstract 】
A distributed system consists of networked components that interact with each other to achieve a common goal. Given the ubiquity of distributed systems and their vulnerability to adversarial attacks, it is crucial to design systems that are provably secure. In this dissertation, we propose and explore the problems of performing consensus, consensus-based learning, and statistical learning in the presence of malicious components.

(1) Consensus: We explore the influence of communication range on the computability of reaching iterative approximate consensus. In particular, we characterize the tight topological condition on the network under which consensus is achievable in the presence of Byzantine components. Our results close the gap left by previous work.

(2) Consensus-Based Learning: We propose what are, to the best of our knowledge, the first consensus-based Byzantine-tolerant learning problems: Consensus-Based Multi-Agent Optimization and Consensus-Based Distributed Hypothesis Testing. For the former, we characterize the performance degradation and design efficient algorithms that achieve the optimal fault-tolerance performance. For the latter, we propose, as far as we know, the first learning algorithm under which the good agents can collaboratively identify the underlying truth.

(3) Statistical Learning: Finally, we explore distributed statistical learning, where the distributed system is captured by the server-client model. We develop a distributed machine learning algorithm that is able to (i) tolerate Byzantine failures, (ii) accurately learn a highly complex model with low local data volume, and (iii) converge exponentially fast using logarithmic communication rounds.
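To give a concrete flavor of the kind of iterative Byzantine-tolerant procedure studied in part (1), the sketch below shows one round of a generic trim-and-average update in Python. It is only an illustrative assumption about the general style of such algorithms, not the specific algorithm or topological condition analyzed in the dissertation; the parameter f (an assumed bound on the number of faulty neighbors) and the function name consensus_step are hypothetical.

# Illustrative sketch (not the dissertation's algorithm): one round of a
# trim-and-average style Byzantine-tolerant iterative approximate consensus
# update. A good node discards the f largest and f smallest received values
# (f = assumed bound on faulty neighbors) and averages the rest together
# with its own value.
def consensus_step(own_value, neighbor_values, f):
    """One hypothetical update step for a single good node."""
    if len(neighbor_values) < 2 * f + 1:
        raise ValueError("need more than 2f neighbor values to trim safely")
    trimmed = sorted(neighbor_values)[f:len(neighbor_values) - f]
    values = trimmed + [own_value]
    return sum(values) / len(values)

# Example: a good node with value 0.4, five neighbor reports (one of which
# may be Byzantine and arbitrarily large), and f = 1.
print(consensus_step(0.4, [0.1, 0.5, 0.3, 0.2, 100.0], f=1))

In this example the extreme report 100.0 is trimmed away, so the Byzantine neighbor cannot drag the update outside the range of the good values.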
【 Preview 】
Attachment List
File | Size | Format
Defending distributed systems against adversarial attacks: consensus, consensus-based learning, and statistical learning | 905KB | PDF
Document Metrics
Downloads: 10    Views: 12