Dissertation Details
LP and SDP extended formulations: Lower bounds and approximation algorithms
Author: Roy, Aurko ; Advisor: Pokutta, Sebastian ; Committee: Xu, Huan ; Vempala, Santosh ; Randall, Dana ; Dey, Santanu ; Blekherman, Greg
University:Georgia Institute of Technology
Department:Computer Science
Keywords: Extended formulation; Linear programming; Semidefinite programming; Machine learning; Optimization
Full text: https://smartech.gatech.edu/bitstream/1853/58666/1/ROY-DISSERTATION-2017.pdf
United States | English
Source: SMARTech Repository
【 Abstract 】

In this thesis we study various aspects of linear and semidefinite programs, including their limitations in approximating various combinatorial optimization problems, as well as applications of these paradigms to solving problems in machine learning.

In the first part we show inapproximability results for polynomial-sized LPs and SDPs for several problems. For the class of Constraint Satisfaction Problems (CSPs), it is known that general LPs and SDPs are no more powerful than the Sherali-Adams and Lasserre hierarchies, respectively. We show lower bounds for general LPs and SDPs for several non-CSP problems, such as Matching, Matching on 3-regular graphs, Independent Set, Vertex Cover, (non-uniform) Sparsest Cut, and (non-uniform) Balanced Separator. We also obtain the first general SDP inapproximability result for Maximum Cut: no polynomial-sized SDP can achieve an approximation ratio better than 15/16. We also show that, contrary to the situation with CSPs, there are problems such as Independent Set for which the Lasserre SDP hierarchy is a suboptimal relaxation compared to even linear-sized LP relaxations.

The second part of the thesis deals with applications of these paradigms to problems in machine learning. In particular, we study a recently proposed cost function for hierarchical clustering and show how to write an integer program describing the convex hull of this problem. We also show that rounding "spreading metric" type relaxations leads to improved approximation guarantees for this problem. We further study another classic problem in machine learning, namely reinforcement learning in a robust setting, where the transition matrix corresponding to each action is not known explicitly but may lie in some (convex) uncertainty set and may be chosen adversarially. We give approximation algorithms to compute an (approximately) optimal policy in this setting. The main ingredient of this work is to define "robust" variants of the classical Q-iterations and TD-iterations, in which the update involves an additional linear optimization step. We prove convergence under appropriate choices of step lengths and discount factor.
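The hierarchical clustering objective the abstract refers to can be illustrated with a cost function of the kind proposed by Dasgupta: each graph edge pays its weight times the number of leaves under the lowest common ancestor of its endpoints in the cluster tree. The sketch below is illustrative only (the tree encoding as nested tuples and the triangle-graph example are ours, not from the thesis):

```python
# Illustrative sketch of a Dasgupta-style hierarchical clustering cost:
# cost(T) = sum over edges (i,j) of w_ij * |leaves(T[i v j])|, where
# T[i v j] is the subtree rooted at the LCA of leaves i and j.
# Binary trees are nested tuples; leaves are integer labels.

def leaves(tree):
    """Return the set of leaf labels under a nested-tuple tree."""
    if isinstance(tree, int):
        return {tree}
    left, right = tree
    return leaves(left) | leaves(right)

def dasgupta_cost(tree, edges):
    """edges: dict mapping frozenset({i, j}) -> weight w_ij."""
    if isinstance(tree, int):
        return 0.0
    left, right = tree
    n_here = len(leaves(tree))
    # Edges whose endpoints are split at this node have their LCA here,
    # so they pay w_ij * (number of leaves under this node).
    split_weight = sum(w for e, w in edges.items()
                       if e & leaves(left) and e & leaves(right))
    return (split_weight * n_here
            + dasgupta_cost(left, edges) + dasgupta_cost(right, edges))

# Triangle graph with unit weights: any binary tree over {0, 1, 2} merges
# two leaves first (that edge pays 2) and splits the other two edges at
# the root (each pays 3), so every tree costs 2 + 3 + 3 = 8.
edges = {frozenset({0, 1}): 1.0, frozenset({0, 2}): 1.0, frozenset({1, 2}): 1.0}
print(dasgupta_cost(((0, 1), 2), edges))  # → 8.0
```

The "spreading metric" LP relaxations mentioned above replace the tree by fractional distances that force every small ball to exclude a constant fraction of each cluster; this sketch only shows the combinatorial objective being relaxed.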
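The robust Q-iteration described above can be sketched under simplifying assumptions: a finite state/action space, and an uncertainty set given by finitely many candidate transition distributions, so the inner linear optimization reduces to a minimum over those candidates (for a general convex set one would call an LP solver at this step). The MDP below is a toy example of ours, not from the thesis:

```python
# Hedged sketch of robust Q-iteration on a finite MDP: in each update the
# adversary picks, from the uncertainty set U(s, a) of candidate transition
# distributions, the one minimizing the agent's expected continuation value.
# This inner minimum is the "additional linear optimization step"; here
# U(s, a) is a finite list, i.e. we enumerate the polytope's vertices.

def robust_q_iteration(n_states, n_actions, reward, unc_sets,
                       gamma=0.9, n_iters=200):
    """
    reward[s][a]   : immediate reward for action a in state s
    unc_sets[s][a] : list of candidate distributions over next states
    Returns an approximate robust Q-function as a nested list.
    """
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(n_iters):
        V = [max(Q[s]) for s in range(n_states)]  # greedy state values
        Q = [[reward[s][a] + gamma * min(          # worst-case transition
                 sum(p[t] * V[t] for t in range(n_states))
                 for p in unc_sets[s][a])
              for a in range(n_actions)]
             for s in range(n_states)]
    return Q

# Two states, one action. From state 0 the transition is ambiguous: either
# stay in state 0 (reward 1 forever) or fall into absorbing state 1
# (reward 0). The adversary always chooses the fall.
reward = [[1.0], [0.0]]
unc_sets = [[[[1.0, 0.0], [0.0, 1.0]]],  # U(0, 0): stay vs. fall
            [[[0.0, 1.0]]]]              # U(1, 0): absorbing
Q = robust_q_iteration(2, 1, reward, unc_sets)
# Robust value of state 0 is 1 (one reward, then stuck), versus
# 1 / (1 - gamma) = 10 if the adversary could not choose the fall.
print(round(Q[0][0], 3))  # → 1.0
```

This is exact robust value iteration on a known model; the thesis's Q- and TD-style iterations instead use sampled updates with step lengths, which is where the convergence analysis mentioned above comes in.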

【 Preview 】
Attachments
File | Size | Format | View
LP and SDP extended formulations: Lower bounds and approximation algorithms | 2736KB | PDF | download
Document metrics
Downloads: 8 | Views: 13