Dissertation Details
Information-theoretic limitations of distributed information processing
Author: Xu, Aolin
Keywords: distributed information processing; fundamental limits; information theory; decentralized estimation; Bayes risk; small ball probability; strong data processing inequality; distributed function computation; computation time; network reduction; diameter of network; statistical learning; adaptive composition; stability of learning algorithms; generalization error; mutual information; Gibbs algorithm; adaptive data analytics
Others: https://www.ideals.illinois.edu/bitstream/handle/2142/95358/XU-DISSERTATION-2016.pdf?sequence=1&isAllowed=y
United States | English
Source: The Illinois Digital Environment for Access to Learning and Scholarship
Format: PDF
【 Abstract 】

In a generic distributed information processing system, a number of agents connected by communication channels aim to accomplish a task collectively through local communications. The fundamental limits of such problems depend not only on the intrinsic difficulty of the task, but also on the communication constraints imposed by the distributed architecture. In this thesis, we quantify these dependencies within information-theoretic frameworks. We consider three representative distributed information processing problems: decentralized parameter estimation, distributed function computation, and statistical learning under adaptive composition. For the first two problems, we derive converse results on the Bayes risk and on the computation time, respectively. For the third, we first study the relationship between the generalization capability of a learning algorithm and its stability, measured by the mutual information between the algorithm's input and output, and then derive achievability results on the generalization error of adaptively composed learning algorithms. In all cases, we obtain the fundamental limits under a general model of the problem, so that the results apply to a variety of specific scenarios. Our information-theoretic analyses also provide general approaches to inferring global properties of a distributed information processing system from local properties of its components.
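Two of the abstract's claims can be made concrete with standard statements from this line of work; the notation below is illustrative, not quoted from the thesis. The converse results for decentralized estimation rest on strong data processing inequalities, whose standard form says that for a Markov chain U → X → Y, in which Y is produced from X by a fixed channel K,

\[
I(U; Y) \;\le\; \eta(K)\, I(U; X),
\]

where η(K) ∈ [0, 1] is the contraction coefficient of K; this is the mechanism by which local channel properties translate into global limits on a network. For the statistical learning part, the stability result referred to above, in the form published in the author's companion paper (Xu and Raginsky, 2017), assumes the loss ℓ(w, Z) is σ-sub-Gaussian under the data distribution for every hypothesis w, and bounds the expected generalization error of an algorithm that maps an n-sample dataset S to a hypothesis W:

\[
\bigl|\mathbb{E}\bigl[L_\mu(W) - L_S(W)\bigr]\bigr| \;\le\; \sqrt{\frac{2\sigma^2}{n}\, I(S; W)},
\]

where L_μ and L_S denote the population and empirical risks and I(S; W) is the input-output mutual information of the algorithm. The Gibbs algorithm listed among the keywords is the canonical example whose I(S; W) admits direct control.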

【 Preview 】
Attachments
Files | Size | Format | View
Information-theoretic limitations of distributed information processing | 2014 KB | PDF | download
Document metrics
Downloads: 18    Views: 79