Journal Article Details
NEUROCOMPUTING, Volume 267
On the distributed optimization over directed networks
Article
Xi, Chenguang [1]; Wu, Qiong [2]; Khan, Usman A. [1]
[1] Tufts Univ, Dept Elect & Comp Engn, 161 Coll Ave, Medford, MA 02155 USA
[2] Tufts Univ, Dept Math, 503 Boston Ave, Medford, MA 02155 USA
Keywords: Distributed optimization; Multi-agent networks; Directed graphs; Distributed subgradient descent
DOI: 10.1016/j.neucom.2017.06.038
Source: Elsevier
【 Abstract 】

In this paper, we propose a distributed algorithm, called Directed-Distributed Subgradient Descent (D-DSD), to solve multi-agent optimization problems over directed graphs. Existing algorithms mostly deal with similar problems under the assumption of undirected networks, i.e., they require the weight matrices to be doubly-stochastic. The row-stochasticity of the weight matrix guarantees that all agents reach consensus, while the column-stochasticity ensures that each agent's local (sub)gradient contributes equally to the global objective. In a directed graph, however, it may not be possible to construct a doubly-stochastic weight matrix in a distributed manner. We overcome this difficulty by augmenting each agent's state with an additional variable that records the change in its state evolution. In each iteration, the algorithm simultaneously constructs a row-stochastic matrix and a column-stochastic matrix instead of a single doubly-stochastic matrix. The convergence of the resulting weight matrix, which depends on both the row-stochastic and column-stochastic matrices, ensures that the agents reach both consensus and optimality. The analysis shows that the proposed algorithm converges at a rate of O(ln k/√k), where k is the number of iterations. (C) 2017 Elsevier B.V. All rights reserved.
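
For intuition, below is a minimal Python sketch of the closely related push-sum-style correction, not the paper's exact D-DSD updates: each agent keeps an auxiliary variable alongside its state, mixes both through a column-stochastic matrix (which, unlike a doubly-stochastic one, each agent can build locally from its out-degree on a directed graph), and descends along its local (sub)gradient evaluated at the bias-corrected ratio. The graph, weights, and local objectives f_i(x) = 0.5*(x - b_i)^2 are illustrative assumptions.

    # A minimal sketch, NOT the paper's exact D-DSD algorithm: it only shows
    # how an auxiliary variable restores the balance that double-stochasticity
    # would otherwise provide. All data below are illustrative assumptions.
    import numpy as np

    n = 3
    b = np.array([1.0, 2.0, 6.0])      # agent i minimizes f_i(x) = 0.5*(x - b[i])**2,
                                       # so the network-wide optimum is mean(b) = 3.0

    # Column-stochastic weights for the directed cycle 1 -> 2 -> 3 -> 1 with
    # self-loops; every column sums to 1, which each agent can arrange locally
    # from its own out-degree -- no doubly-stochastic construction is needed.
    A = np.array([[0.5, 0.0, 0.5],
                  [0.5, 0.5, 0.0],
                  [0.0, 0.5, 0.5]])

    x = np.zeros(n)                    # primal states
    y = np.ones(n)                     # auxiliary variables tracking the state evolution

    for k in range(1, 20001):
        w = A @ x                      # mix states through the column-stochastic matrix
        y = A @ y                      # mix the auxiliary variables the same way
        z = w / y                      # bias-corrected estimate at each agent
        grad = z - b                   # local (sub)gradient of f_i at z_i
        x = w - grad / np.sqrt(k)      # diminishing O(1/sqrt(k)) step size

    print(x / y)                       # every entry approaches the optimum 3.0

The paper's D-DSD combines a row-stochastic and a column-stochastic matrix in every iteration; the sketch above conveys only why the extra variable y compensates for the imbalance of a one-sidedly stochastic matrix, consistent with the O(ln k/√k) regime that diminishing 1/√k steps yield.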

【 License 】

Free   

【 Preview 】
Attachment List
File                                  Size    Format  View
10_1016_j_neucom_2017_06_038.pdf      784 KB  PDF     download
Article Metrics
Downloads: 1    Views: 0