Technical Report Details
Limited-memory adaptive snapshot selection for proper orthogonal decomposition
Oxberry, Geoffrey M.[1]; Kostova-Vassilevska, Tanya[1]; Arrighi, Bill[1]; Chand, Kyle[1]
[1]Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
Keywords: proper orthogonal decomposition; reduced order model; snapshot; incremental singular value decomposition
DOI: 10.2172/1224940
RP-ID: LLNL-TR--669265
PID: OSTI ID: 1224940
Subject classification: Mathematics (General)
Country | Language: United States | English
Source: SciTech Connect
【 Abstract 】
Reduced order models are useful for accelerating simulations in many-query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models can have prohibitively expensive memory and floating-point operation costs in high-performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according to an error control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers. The error estimator used in this work is related to theory bounding the approximation error in time of proper orthogonal decomposition-based reduced order models, and memory usage is minimized by computing the singular value decomposition using a single-pass incremental algorithm. Results for a viscous Burgers' test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full order model is recovered to within discretization error. The resulting method can be used on supercomputers to generate proper orthogonal decomposition-based reduced order models, or as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space.
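The abstract combines two ingredients: error-controlled selection of snapshots in time and a single-pass incremental SVD that avoids storing the full snapshot matrix. The report text is not reproduced here, so the NumPy sketch below is only illustrative. It assumes a Brand-style rank-update of the thin SVD and uses the relative projection error of each candidate snapshot onto the current POD basis as the acceptance criterion; this stands in for the report's estimator, which is tied to the time-approximation error bounds mentioned in the abstract. The function names, the tolerance eps, and the toy snapshot stream are hypothetical.

```python
import numpy as np


def update_svd(U, S, x, trunc_tol=1e-10):
    """One Brand-style rank-update of a thin SVD with a new column x (illustrative).

    U: (n, k) left singular vectors, S: (k,) singular values.
    Right singular vectors are not tracked, so memory stays O(n*k)
    over a single pass through the snapshot stream.
    """
    p = U.T @ x                      # coefficients of x in the current basis
    r = x - U @ p                    # component of x orthogonal to the basis
    rho = np.linalg.norm(r)
    if rho < trunc_tol:
        # x lies (numerically) in span(U); only the singular values rotate
        K = np.column_stack([np.diag(S), p])
        Uk, Sk, _ = np.linalg.svd(K, full_matrices=False)
        return U @ Uk, Sk
    # augment the basis with the normalized residual direction
    K = np.zeros((len(S) + 1, len(S) + 1))
    K[:-1, :-1] = np.diag(S)
    K[:-1, -1] = p
    K[-1, -1] = rho
    Uk, Sk, _ = np.linalg.svd(K)
    U_new = np.column_stack([U, r / rho]) @ Uk
    # drop negligible singular values to bound the basis size
    keep = Sk > trunc_tol * Sk[0]
    return U_new[:, keep], Sk[keep]


def adaptive_snapshot_pod(snapshot_stream, eps=1e-3):
    """Select snapshots adaptively and build a POD basis in a single pass.

    snapshot_stream: iterable of 1-D state vectors (e.g. one per time step).
    eps: tolerance on the relative projection error used to decide whether a
         candidate snapshot is incorporated (illustrative criterion, not the
         report's estimator).
    """
    U = S = None
    n_selected = 0
    for x in snapshot_stream:
        if U is None:
            nrm = np.linalg.norm(x)
            U, S = (x / nrm).reshape(-1, 1), np.array([nrm])
            n_selected += 1
            continue
        # projection error of the candidate onto the current POD subspace
        err = np.linalg.norm(x - U @ (U.T @ x)) / max(np.linalg.norm(x), 1e-30)
        if err > eps:
            U, S = update_svd(U, S, x)
            n_selected += 1
    return U, S, n_selected


if __name__ == "__main__":
    # toy stream: slowly varying states sampled on a 1-D grid
    grid = np.linspace(0.0, 1.0, 200)
    stream = (np.sin(np.pi * grid * (1.0 + 0.01 * t)) for t in range(500))
    U, S, n_sel = adaptive_snapshot_pod(stream, eps=1e-4)
    print(f"selected {n_sel} of 500 snapshots, basis size {U.shape[1]}")
```

Because only U and S are retained, memory scales with the size of the POD basis rather than with the number of time steps, which is the point of the single-pass incremental formulation described in the abstract.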
【 Preview 】
Attachments: PDF, 346 KB (download)
Document metrics
Downloads: 18 | Views: 40