Dissertation Details
Stochastic approximation schemes for stochastic optimization and variational problems: adaptive steplengths, smoothing, and regularization
Author: Yousefian, Seyed Farzad
Keywords: Stochastic approximation methods; Stochastic optimization; Stochastic variational inequalities; Game theory
Others: https://www.ideals.illinois.edu/bitstream/handle/2142/45385/Seyed%20Farzad_Yousefian.pdf?sequence=1&isAllowed=y
United States | English
Source: The Illinois Digital Environment for Access to Learning and Scholarship
Format: PDF
【 Abstract 】
Stochastic approximation (SA) methods, first proposed by Robbins and Monro in 1951 for root-finding problems, have been widely used in the literature to solve problems arising from stochastic convex optimization, stochastic Nash games, and, more recently, stochastic variational inequalities. Several challenges arise in the development of SA schemes. First, little guidance is provided on the choice of the steplength sequence. Second, most variants of these schemes in optimization require differentiability of the objective function and Lipschitz continuity of its gradient. Finally, strong convexity of the objective function is another common requirement, and a strong assumption in practice. Motivated by these challenges, this thesis studies research questions related to SA methods in three areas: (i) steplengths, (ii) smoothing, and (iii) regularization.

The first part of this thesis pertains to solving strongly convex differentiable stochastic optimization problems using SA methods. The performance of standard SA implementations can vary significantly with the choice of the steplength sequence, and, in general, little guidance is available about good choices. Motivated by this gap, we present two adaptive steplength schemes, equipped with convergence theory, that aim to reduce the reliance on user-specific parameters. The first scheme, referred to as a recursive steplength stochastic approximation (RSA) scheme, minimizes the error bounds to derive a rule that expresses the steplength at a given iteration as a simple function of the steplength at the previous iteration and certain problem parameters. The second scheme, termed a cascading steplength stochastic approximation (CSA) scheme, maintains the steplength sequence as a piecewise-constant decreasing function, with the reduction in the steplength occurring when a suitable error threshold is met. We then allow for nondifferentiable objectives, but with bounded subgradients over a certain domain. In such a regime, we propose a local smoothing technique, based on random local perturbations of the objective function, that leads to a differentiable approximation of the function and a Lipschitzian property for the gradient of the approximation. This facilitates the development of an adaptive steplength stochastic approximation framework, which now requires sampling in the product space of the original measure and the artificially introduced distribution.

Motivated by problems arising in decentralized control and non-cooperative Nash games, in the second part of this thesis we consider a class of strongly monotone Cartesian variational inequality problems in which the mappings either contain expectations or have evaluations corrupted by error. Such complications are captured under the umbrella of Cartesian stochastic variational inequality (CSVI) problems, and we consider solving such problems via SA schemes. Specifically, along directions similar to those of the RSA scheme, a stepsize rule is constructed for strongly monotone stochastic variational inequality problems. The proposed scheme is shown to produce sequences that converge almost surely to the unique solution of the problem. To cope with networked multi-agent generalizations, we provide requirements under which independently chosen steplength rules still possess desirable almost-sure convergence properties. To address non-smoothness, we consider a regime where Lipschitz constants of the map are either unavailable or difficult to derive.
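Before turning to that generalization, the two ingredients of the first part can be made concrete in a short sketch. The Python fragment below is a minimal illustration under stated assumptions, not the thesis's algorithm: the recursion gamma_{k+1} = gamma_k * (1 - c * gamma_k) is one hypothetical instance of a rule that writes the new steplength as a function of the previous one, the coordinate-wise uniform draw is a simple stand-in for sampling from a local ball, and the names smoothed_sa, subgrad_sample, c, and radius are invented for the example.

    import numpy as np

    def smoothed_sa(subgrad_sample, x0, c, gamma0, radius, num_iters, rng=None):
        # SA iteration combining (i) a recursively updated steplength and
        # (ii) randomized local smoothing of a nonsmooth objective.
        # subgrad_sample(x, rng) returns a noisy subgradient at x.
        rng = rng or np.random.default_rng()
        x = np.asarray(x0, dtype=float)
        gamma = gamma0
        for _ in range(num_iters):
            # Perturb the evaluation point; together with the noise inside
            # subgrad_sample, this is one draw from the product space of the
            # original measure and the artificially introduced distribution.
            z = rng.uniform(-radius, radius, size=x.shape)
            g = subgrad_sample(x + z, rng)
            x = x - gamma * g
            # Recursive steplength update (assumed form; needs 0 < gamma0 < 1/c
            # so the sequence stays positive and decreasing).
            gamma *= 1.0 - c * gamma
        return x

    # Example: minimize E|x - xi| with xi ~ N(1, 0.1^2); the objective is
    # nondifferentiable at the minimizer x* = 1.
    noisy_subgrad = lambda x, rng: np.sign(x - rng.normal(1.0, 0.1))
    print(smoothed_sa(noisy_subgrad, x0=0.0, c=0.1, gamma0=0.5,
                      radius=0.1, num_iters=20000))

The perturbation radius controls the usual trade-off: a larger radius yields a smaller Lipschitz constant for the smoothed gradient but a coarser approximation of the original objective.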
In this regime, we generalize the aforementioned smoothing scheme to derive an approximation of the original mapping, which is then shown to be Lipschitz continuous with a prescribed constant. Using this technique, we introduce a locally randomized SA algorithm and provide almost-sure convergence theory for the resulting sequence of iterates to an approximate solution of the original CSVI problem.

In the third part of this thesis, we consider a stochastic variational inequality (SVI) problem with a continuous and monotone mapping over a compact and convex set. Traditionally, stochastic approximation schemes for SVIs have relied on strong monotonicity and Lipschitzian properties of the underlying map. We present a regularized smoothed SA (RSSA) scheme in which the stepsize, smoothing, and regularization parameters are updated after every iteration. Under suitable assumptions on these sequences, we show that the algorithm generates iterates that converge to a solution of the SVI problem in an almost-sure sense. Additionally, we provide rate estimates that relate the iterates to their counterparts derived from the Tikhonov trajectory associated with a deterministic problem.
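As a closing illustration of the third part, the sketch below combines the three per-iteration updates named above (stepsize, smoothing, and regularization) in a single projected SA loop. It is a sketch under stated assumptions, not the thesis's RSSA algorithm: the power-law parameter sequences are placeholders for sequences satisfying the scheme's conditions, the box projection stands in for the projection onto a general compact convex set, and the names rssa and map_sample are invented for the example.

    import numpy as np

    def rssa(map_sample, x0, num_iters, lo=-1.0, hi=1.0, rng=None):
        # Regularized smoothed SA sketch for a monotone stochastic
        # variational inequality over the box [lo, hi]^n.
        rng = rng or np.random.default_rng()
        x = np.asarray(x0, dtype=float)
        for k in range(1, num_iters + 1):
            gamma = k ** -0.7     # stepsize, updated every iteration
            eps = k ** -0.25      # Tikhonov regularization weight
            radius = k ** -0.3    # smoothing radius, shrinking over time
            z = rng.uniform(-radius, radius, size=x.shape)  # local perturbation
            g = map_sample(x + z, rng) + eps * x            # smoothed, regularized map
            x = np.clip(x - gamma * g, lo, hi)              # projected SA step
        return x

    # Example: F(x, xi) = A x + xi with A skew-symmetric is monotone but not
    # strongly monotone; x* = 0 solves the SVI over the box.
    A = np.array([[0.0, 1.0], [-1.0, 0.0]])
    F = lambda x, rng: A @ x + rng.normal(0.0, 0.1, size=x.shape)
    print(rssa(F, x0=np.array([0.8, -0.5]), num_iters=20000))

The diminishing eps regularizes the merely monotone map into a strongly monotone one at each iteration, which is what lets the scheme dispense with strong monotonicity of the original mapping.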
【 Preview 】
Attachment List
Files | Size | Format | View
Stochastic approximation schemes for stochastic optimization and variational problems: adaptive steplengths, smoothing, and regularization | 2685KB | PDF | download
Document Metrics
Downloads: 3    Views: 12