Dissertation details
Distributed optimization in multi-agent systems: applications to distributed regression
Srinivasan, Sundhar Ram
Keywords: multi-agent systems; stochastic optimization; convex optimization; distributed optimization; regression; distributed regression
Others  :  https://www.ideals.illinois.edu/bitstream/handle/2142/16105/Srinivasan_SundharRam.pdf?sequence=2&isAllowed=y
United States | English
Source: The Illinois Digital Environment for Access to Learning and Scholarship
PDF
【 Abstract 】

The context for this work is cooperative multi-agent systems (MAS). An agent is an intelligent entity that can measure some aspect of its environment, process information, and possibly influence the environment through its actions. A cooperative MAS can be defined as a loosely coupled network of agents that interact and cooperate to solve problems that are beyond the individual capabilities or knowledge of any single agent.

The focus of this thesis is distributed stochastic optimization in multi-agent systems. In distributed optimization, the complete optimization problem is not available at a single location but is distributed among different agents. The distributed optimization problem is additionally stochastic when the information available to each agent is corrupted by stochastic errors. Communication constraints, the lack of global information about the network topology, and the absence of coordinating agents make it infeasible to collect all the information at a single location and treat the problem as a centralized one. Thus, the problem has to be solved using algorithms that are distributed, i.e., different parts of the algorithm are executed at different agents, and local, i.e., each agent uses only information locally available to it and information it can obtain from its immediate neighbors.

In this thesis, we primarily focus on the specific problem of minimizing a sum of functions over a constraint set, when each component function is known partially (with stochastic errors) to a unique agent. The constraint set is known to all the agents. We propose three distributed and local algorithms, establish asymptotic convergence with diminishing stepsizes, and obtain rate-of-convergence results. Stochastic errors, as we will see, arise naturally when the objective function known to an agent involves a random variable with unknown statistics. Stochastic errors also model communication and quantization errors.
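The core problem above admits a concise illustrative sketch. The following is a minimal simulation of a consensus-based projected stochastic gradient method, in the spirit of the class of algorithms described here but not the thesis's exact scheme: each agent averages its neighbors' iterates via a doubly stochastic weight matrix, takes a noisy gradient step on its own component function with a diminishing stepsize, and projects onto the common constraint set. The box constraint, quadratic components, and all function names are illustrative assumptions.

```python
import numpy as np

def project_box(x, lo=-1.0, hi=1.0):
    """Euclidean projection onto the common box constraint set [lo, hi]^d."""
    return np.clip(x, lo, hi)

def distributed_stochastic_gradient(grads, weights, x0, n_iters=2000,
                                    noise_std=0.1, seed=0):
    """Consensus-based projected stochastic gradient sketch (assumed scheme).

    grads   : list of per-agent gradient functions, one per agent
    weights : doubly stochastic mixing matrix encoding the network
    x0      : common initial iterate
    """
    rng = np.random.default_rng(seed)
    m = len(grads)
    x = np.tile(x0, (m, 1)).astype(float)    # one local iterate per agent
    for k in range(1, n_iters + 1):
        step = 1.0 / k                       # diminishing stepsize
        mixed = weights @ x                  # consensus step with neighbors
        for i in range(m):
            # Agent i sees only a noisy gradient of its own component.
            g = grads[i](mixed[i]) + noise_std * rng.standard_normal(x.shape[1])
            x[i] = project_box(mixed[i] - step * g)
    return x

# Example: agent i holds f_i(x) = ||x - c_i||^2; the network minimizes
# the sum, whose unconstrained minimizer is the mean of the c_i.
centers = [np.array([0.5, 0.0]), np.array([-0.5, 0.2]), np.array([0.0, 0.4])]
grads = [lambda x, c=c: 2.0 * (x - c) for c in centers]
W = np.full((3, 3), 1.0 / 3.0)               # fully connected, uniform weights
iterates = distributed_stochastic_gradient(grads, W, np.zeros(2))
```

Despite each agent knowing only its own noisy component, the consensus step drives all local iterates toward a common minimizer of the sum.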
The problem is motivated by distributed regression in sensor networks and power control in cellular systems.

We also discuss an important extension of the above problem. In the extension, the network goal is to minimize a global function of a sum of component functions over a constraint set. Each component function is known to a unique network agent. The global function and the constraint set are known to all the agents. Unlike the previous problem, this problem is not stochastic; however, the objective function is more general. We propose an algorithm to solve this problem and establish its convergence.
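To make the extension's structure concrete, here is a hypothetical instance of minimizing a global function of a sum of components, F(Σ_i f_i(x)), shown as a centralized gradient check via the chain rule rather than the distributed algorithm proposed in the thesis. The choices F(s) = log(1 + s) and quadratic f_i are illustrative assumptions.

```python
import numpy as np

# Hypothetical instance: F(s) = log(1 + s) is the global function known to
# all agents; f_i(x) = ||x - c_i||^2 is known only to agent i.
centers = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

def f_sum(x):
    """Sum of the component functions, s = sum_i f_i(x)."""
    return sum(np.dot(x - c, x - c) for c in centers)

def grad(x):
    """Chain rule: F'(s) * sum_i grad f_i(x), with F'(s) = 1 / (1 + s)."""
    s = f_sum(x)
    return (1.0 / (1.0 + s)) * sum(2.0 * (x - c) for c in centers)

x = np.zeros(2)
for k in range(1, 500):
    x = x - (1.0 / k) * grad(x)   # diminishing-stepsize gradient descent
```

Since F is increasing, the minimizer of F(Σ_i f_i) coincides with that of Σ_i f_i, here the mean of the centers; the run above converges toward (0.5, 0.5).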

【 Preview 】
Attachments
Files Size Format View
Distributed optimization in multi-agent systems: applications to distributed regression 995KB PDF download
Document metrics
Downloads: 14    Views: 27