21st International Conference on Computing in High Energy and Nuclear Physics
Scaling up ATLAS production system for the LHC Run 2 and beyond: project ProdSys2
Subject areas: Physics; Computer Science
Borodin, M.^1,2 ; De, K.^3 ; Garcia Navarro, J.^4 ; Golubkov, D.^1,5 ; Klimentov, A.^6 ; Maeno, T.^6 ; Vaniachine, A.^7
^1 Big Data Laboratory, National Research Centre "Kurchatov Institute", Moscow, Russia
^2 Department of Elementary Particle Physics, National Research Nuclear University MEPhI, Moscow, Russia
^3 Physics Department, University of Texas, Arlington, TX, United States
^4 Instituto de Fisica Corpuscular, Universidad de Valencia, Spain
^5 Experimental Physics Department, Institute for High Energy Physics, Protvino 142281, Russia
^6 Physics Department, Brookhaven National Laboratory, Bldg. 510A, Upton, NY 11973, United States
^7 High Energy Physics Division, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, IL 60439, United States
Keywords: ATLAS experiment; Data transformation; Database engine; Production system; Scalable production; Software applications; Workflow patterns; Workload management
DOI: 10.1088/1742-6596/664/6/062005
Full text: https://iopscience.iop.org/article/10.1088/1742-6596/664/6/062005/pdf
Subject classification: Computer Science (General)
Source: IOP
Abstract
The Big Data processing needs of the ATLAS experiment grow continuously as more data and more use cases emerge. For Big Data processing the ATLAS experiment adopted the data transformation approach, where software applications transform the input data into outputs. In the ATLAS production system, each data transformation is represented by a task: a collection of many jobs, submitted by the ATLAS workload management system (PanDA) and executed on the Grid. Our experience shows that the rate of task submission has grown exponentially over the years. To scale up the ATLAS production system for new challenges, we started the ProdSys2 project. PanDA has been upgraded with the Job Execution and Definition Interface (JEDI). Patterns in ATLAS data transformation workflows composed of many tasks provided a scalable production system framework for template definitions of the many-task workflows. These workflows are being implemented in the Database Engine for Tasks (DEfT), which generates individual tasks for processing by JEDI. We report on the ATLAS experience with many-task workflow patterns in preparation for the LHC Run 2.
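The template-to-task expansion described in the abstract can be illustrated with a minimal sketch. This is not DEfT or JEDI code: the class names, the chunking rule, and the dataset/step names below are all hypothetical, and stand in for the idea that a workflow template of ordered transformation steps is expanded into one task per (step, dataset), with each task fanning out into many Grid jobs.

```python
# Illustrative sketch only -- not actual DEfT/JEDI code. Names and the
# file-chunking rule are hypothetical stand-ins for the template idea.
from dataclasses import dataclass, field

@dataclass
class Task:
    step: str                 # transformation step, e.g. "simul"
    input_dataset: str        # dataset this task transforms
    jobs: list = field(default_factory=list)  # (first_file, last_file) chunks

def expand_template(steps, datasets, files_per_dataset=100, files_per_job=10):
    """Expand a workflow template (an ordered list of transformation
    steps) into one task per (step, dataset), splitting each dataset's
    input files into fixed-size job chunks."""
    tasks = []
    for step in steps:
        for ds in datasets:
            task = Task(step, ds)
            for start in range(0, files_per_dataset, files_per_job):
                end = min(start + files_per_job, files_per_dataset)
                task.jobs.append((start, end))
            tasks.append(task)
    return tasks

# A three-step template applied to two (made-up) datasets:
tasks = expand_template(["evgen", "simul", "recon"],
                        ["mc.dataset_A", "mc.dataset_B"])
print(len(tasks))          # 3 steps x 2 datasets = 6 tasks
print(len(tasks[0].jobs))  # 100 files / 10 per job = 10 jobs per task
```

The point of the template layer, as the abstract describes, is that recurring workflow patterns are defined once and then instantiated per dataset, so that task submission can scale without hand-defining each task.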