Thesis Details
Accelerated first-order optimization methods using inertia and error bounds
Johnstone, Patrick Royce
Keywords: Optimization; Convergence analysis; Accelerated first-order methods; Subgradient methods
Others: https://www.ideals.illinois.edu/bitstream/handle/2142/97298/JOHNSTONE-DISSERTATION-2017.pdf?sequence=1&isAllowed=y
United States | English
Source: The Illinois Digital Environment for Access to Learning and Scholarship
【 Abstract 】

Optimization is an important discipline of applied mathematics with far-reaching applications. Optimization algorithms often form the backbone of practical systems in machine learning, image processing, signal processing, computer vision, data analysis, and statistics. In an age of massive data sets and huge numbers of variables, a deep understanding of optimization is a necessary condition for developing scalable, computationally inexpensive, and reliable algorithms. In this thesis we design and analyze efficient algorithms for solving the large-scale nonsmooth optimization problems arising in modern signal processing and machine learning applications. The focus is on first-order methods, which have low per-iteration complexity and can exploit problem structure to a high degree. First-order methods have the capacity to address large-scale problems for which all alternative methods fail. However, first-order methods can take many iterations to reach the desired accuracy. This has led optimization researchers to ask the following question: is it possible to improve the convergence rate of first-order methods without jeopardizing their low per-iteration complexity? In this thesis, we address this question in three areas. Firstly, we investigate the use of inertia to accelerate the convergence of proximal gradient methods for convex composite optimization problems. We pay special attention to the famous lasso problem, for which we develop an improved version of the well-known Fast Iterative Soft-Thresholding Algorithm. Secondly, we investigate the use of inertia for nonconvex composite problems, making use of the Kurdyka-Łojasiewicz inequality in our analysis. Finally, when the objective function satisfies an error bound, a condition that is fairly common in practice, we develop stepsize selections for the subgradient method which significantly outperform the classical approach. The overarching message of this thesis is the following: with careful analysis and design, the convergence rate of first-order methods can be significantly improved.
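For readers unfamiliar with the inertial proximal gradient methods discussed in the abstract, the following is a minimal sketch of the standard FISTA iteration applied to the lasso problem, written in Python with NumPy. It is illustrative only: the function names (`soft_threshold`, `fista_lasso`), the fixed stepsize 1/L, and the classical inertial sequence are standard textbook choices, not the improved variant developed in the thesis.

import numpy as np

def soft_threshold(x, tau):
    """Soft-thresholding: the proximal operator of tau * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def fista_lasso(A, b, lam, n_iter=500):
    """Standard FISTA for the lasso: min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    Illustrative sketch; the thesis develops a modified inertial scheme
    that is not reproduced here.
    """
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth part's gradient
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)             # gradient of 0.5*||Ay - b||^2
        x_new = soft_threshold(y - grad / L, lam / L)    # proximal gradient step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)    # inertial (momentum) extrapolation
        x, t = x_new, t_new
    return x

The extrapolation step is what distinguishes the inertial method from plain proximal gradient (ISTA): each new point is pushed along the direction of the previous step, which is the mechanism the thesis refines to obtain faster convergence.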

【 Preview 】
Attachment List
File | Size | Format
Accelerated first-order optimization methods using inertia and error bounds | 1075 KB | PDF
Document Metrics
Downloads: 13  Views: 7