Optimization over networks: Efficient algorithms and analysis

A number of important problems arising in various application domains can be formulated as a distributed convex constrained minimization problem over a multi-agent network. Such a problem is usually defined as the minimization of a sum of convex objective functions over an intersection of convex constraint sets. The first part of this thesis focuses on the development and analysis of efficient distributed algorithms for a constrained convex optimization problem over a multi-agent network in which each agent has its own objective function and constraint set. We propose gradient descent algorithms with random projections that use various communication protocols.

First, we present a distributed random projection (DRP) algorithm in which each agent exchanges local information only with its immediate neighbors at each iteration. Under reasonable assumptions, we prove that the iterates of all agents converge to the same point in the optimal set with probability 1. In addition, we consider a variant of the method that uses a mini-batch of consecutive random projections and establish its convergence. Experiments on distributed support vector machines demonstrate fast convergence of the DRP algorithm: the number of iterations required for convergence is much smaller than the number needed to scan all training samples just once.

Second, we propose an asynchronous gossip-based random projection (GRP) algorithm that solves the distributed problem using gossip-type communications and local computations. We analyze the convergence properties of the algorithm for an uncoordinated diminishing stepsize and for a constant stepsize. For a diminishing stepsize, we prove that the iterates of all agents converge to the same optimal point with probability 1. For a constant stepsize, we establish an error bound on the expected distance between the iterates of the algorithm and the optimal point. In addition, we consider a variant of the method that uses a mini-batch of consecutive random projections and establish its convergence as well. Furthermore, we provide simulation results on a distributed robust model predictive control problem.

In the second part of the thesis, we discuss an efficient epoch gradient descent algorithm for obtaining fast and exact solutions of linear support vector machines (SVMs). SVMs penalized with the popular hinge loss are strongly convex, but their objective does not have a Lipschitz continuous gradient. Using a smooth approximation technique, we construct SVM objectives that retain strong convexity while also having a Lipschitz continuous gradient.
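To make the DRP update concrete, here is a minimal synchronous sketch in Python. The mixing matrix `W`, the local gradient oracles, and the `random_projection` sampler are hypothetical stand-ins for the thesis's formal objects, so this illustrates the structure of the scheme rather than the exact method.

```python
import numpy as np

def drp_step(x, W, grads, random_projection, alpha):
    """One synchronous DRP-style iteration over all m agents.

    x                 : (m, d) array; row i is agent i's current iterate.
    W                 : (m, m) mixing matrix, nonzero only between
                        immediate neighbors in the communication graph.
    grads             : grads[i](v) returns the gradient of agent i's
                        local objective at v.
    random_projection : (i, v) -> projection of v onto one randomly
                        drawn component of agent i's constraint set.
    alpha             : stepsize for this iteration.
    """
    v = W @ x                                  # mix immediate neighbors' iterates
    x_next = np.empty_like(x)
    for i in range(x.shape[0]):
        g = v[i] - alpha * grads[i](v[i])      # local gradient step
        x_next[i] = random_projection(i, g)    # one random projection
    return x_next
```

Under this reading, the mini-batch variant mentioned above would replace the single `random_projection` call with several consecutive random projections.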
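A GRP-style iteration can be sketched similarly; the key difference is that communication is a random pairwise gossip exchange rather than a synchronized neighborhood average. Again, the names and the sampling scheme below are illustrative assumptions, not the thesis's exact protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

def grp_step(x, neighbors, grads, random_projection, alpha):
    """One asynchronous gossip tick of a GRP-style scheme (in place).

    x         : (m, d) array of agent iterates.
    neighbors : neighbors[i] lists the immediate neighbors of agent i.
    alpha     : stepsize; may be constant or diminishing, and in the
                uncoordinated case each agent would use its own.
    """
    i = rng.integers(x.shape[0])          # agent that wakes up
    j = rng.choice(neighbors[i])          # random neighbor it contacts
    v = 0.5 * (x[i] + x[j])               # pairwise gossip average
    for k in (i, j):
        g = v - alpha * grads[k](v)       # local gradient step
        x[k] = random_projection(k, g)    # random projection onto X_k
    return x
```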
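For the second part, one standard way to smooth the hinge loss while keeping strong convexity is a Huber-style approximation; the form below, with smoothing parameter \(\gamma > 0\), is an illustrative example and not necessarily the exact approximation used in the thesis:

\[
\ell_\gamma(z) =
\begin{cases}
0, & z \ge 1,\\
\dfrac{(1-z)^2}{2\gamma}, & 1-\gamma < z < 1,\\
1 - z - \dfrac{\gamma}{2}, & z \le 1-\gamma,
\end{cases}
\qquad z = y\,w^{\top}x.
\]

With an \(\frac{\lambda}{2}\|w\|^2\) regularizer, the objective \(\frac{\lambda}{2}\|w\|^2 + \frac{1}{n}\sum_{i=1}^{n}\ell_\gamma(y_i\,w^{\top}x_i)\) is \(\lambda\)-strongly convex, and its gradient is Lipschitz continuous with constant at most \(\lambda + \max_i \|x_i\|^2/\gamma\), supplying the smoothness that the plain hinge loss lacks.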