Conference Paper Details
16th International Workshop on Advanced Computing and Analysis Techniques in Physics Research
Multilevel Workflow System in the ATLAS Experiment
Physics; Computer Science
Borodin, M.^1 ; De, K.^2 ; Garcia Navarro, J.^3 ; Golubkov, D.^4,5 ; Klimentov, A.^6 ; Maeno, T.^6 ; Vaniachine, A.^7
Department of Elementary Particle Physics, National Research Nuclear University MEPhI, Moscow 117513, Russia ^1
Physics Department, University of Texas Arlington, Arlington, TX 76019, United States ^2
Instituto de Fisica Corpuscular, Universidad de Valencia, Paterna E-46980, Spain ^3
Experimental Physics Department, Institute for High Energy Physics, Protvino 142281, Russia ^4
Big Data Laboratory, National Research Centre Kurchatov Institute, Moscow 123182, Russia ^5
Physics Department, Brookhaven National Laboratory, Bldg. 510A, Upton, NY 11973, United States ^6
High Energy Physics Division, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, IL 60439, United States ^7
Keywords: ATLAS experiment; Data preparation; Energy depositions; Physics analysis; Production manager; Production system; Work-flow systems; Workload management
Full text: https://iopscience.iop.org/article/10.1088/1742-6596/608/1/012015/pdf
DOI: 10.1088/1742-6596/608/1/012015
Subject classification: Computer Science (General)
Source: IOP
【 Abstract 】
The ATLAS experiment is scaling up Big Data processing for the next LHC run using a multilevel workflow system comprised of many layers. In Big Data processing ATLAS deals with datasets, not individual files. Similarly, a task (comprised of many jobs) has become a unit of the ATLAS workflow in distributed computing, with about 0.8M tasks processed per year. To manage the diversity of LHC physics (exceeding 35K physics samples per year), the individual data processing tasks are organized into workflows. For example, the Monte Carlo workflow is composed of many steps: generate or configure hard processes, hadronize signal and minimum-bias (pileup) events, simulate energy deposition in the ATLAS detector, digitize the electronics response, simulate triggers, reconstruct data, convert the reconstructed data into ROOT ntuples for physics analysis, etc. Outputs are merged and/or filtered as necessary to optimize the chain. The bi-level workflow manager, ProdSys2, generates the actual workflow tasks, and their jobs are executed across more than a hundred distributed computing sites by PanDA, the ATLAS job-level workload management system. On the outer level, the Database Engine for Tasks (DEfT) empowers production managers with templated workflow definitions. On the inner level, the Job Execution and Definition Interface (JEDI) is integrated with PanDA to provide dynamic job definition tailored to the sites' capabilities. We report on scaling up the production system to accommodate a growing number of requirements from the main ATLAS areas: Trigger, Physics and Data Preparation.
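To make the two-level design described in the abstract concrete, the following minimal Python sketch shows how a templated workflow (the DEfT level) might be instantiated into a chain of tasks, and how each task might then be split into jobs (the JEDI level). All names here (WorkflowTemplate, Task, split_into_jobs, the step labels) are hypothetical illustrations, and the fixed-size event split stands in for JEDI's dynamic, site-aware job definition; this is not the actual ProdSys2 or PanDA API.

# A minimal sketch (hypothetical names, not the ProdSys2/PanDA API) of the
# two-level idea: an outer level expands a workflow template into tasks,
# an inner level splits each task into jobs.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    """One unit of the workflow: a processing step over a whole dataset."""
    name: str
    input_dataset: str
    output_dataset: str
    n_events: int

@dataclass
class WorkflowTemplate:
    """Outer (DEfT-like) level: an ordered chain of step names that is
    instantiated into concrete tasks for a given physics sample."""
    steps: List[str] = field(default_factory=lambda: [
        "evgen", "simul", "digi", "reco", "ntuple"])

    def instantiate(self, sample: str, n_events: int) -> List[Task]:
        tasks: List[Task] = []
        current = f"{sample}.input"
        for step in self.steps:
            out = f"{sample}.{step}"
            tasks.append(Task(step, current, out, n_events))
            current = out  # each step consumes the previous step's output
        return tasks

def split_into_jobs(task: Task, max_events_per_job: int) -> List[dict]:
    """Inner (JEDI-like) level: job definition. Here a fixed-size split;
    the real system sizes jobs dynamically per site capabilities."""
    jobs = []
    for first in range(0, task.n_events, max_events_per_job):
        jobs.append({"task": task.name,
                     "skip_events": first,
                     "max_events": min(max_events_per_job,
                                       task.n_events - first)})
    return jobs

if __name__ == "__main__":
    chain = WorkflowTemplate().instantiate("mc.ttbar", n_events=10_000)
    for task in chain:
        jobs = split_into_jobs(task, max_events_per_job=2_500)
        print(f"{task.name}: {task.input_dataset} -> {task.output_dataset}, "
              f"{len(jobs)} jobs")

Running the sketch prints the five chained steps of the Monte Carlo example, each consuming the previous step's output dataset and split into four jobs of 2,500 events; merging and filtering steps mentioned in the abstract are omitted for brevity.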