Conference Paper Details
17th International Workshop on Advanced Computing and Analysis Techniques in Physics Research
Dynamic provisioning of a HEP computing infrastructure on a shared hybrid HPC system
Physics; Computer Science
Meier, Konrad^1 ; Fleig, Georg^1 ; Hauth, Thomas^1 ; Janczyk, Michael^2 ; Quast, Günter^1 ; Von Suchodoletz, Dirk^2 ; Wiebelt, Bernd^2
Karlsruhe Institute of Technology, Institut für Experimentelle Kernphysik, Wolfgang-Gaede-Str. 1, 76131 Karlsruhe, Germany^1
Albert-Ludwigs-Universität Freiburg, Professur für Kommunikationssysteme, Hermann-Herder-Str. 10, 79104 Freiburg im Breisgau, Germany^2
Keywords: Computing infrastructures; Dynamic provisioning; Seamless integration; Specialized software; Static partitioning; Virtualization layers; Virtualized environment; Workload managers
Full text: https://iopscience.iop.org/article/10.1088/1742-6596/762/1/012012/pdf
DOI: 10.1088/1742-6596/762/1/012012
Subject classification: Computer Science (General)
Source: IOP
【 Abstract 】

Experiments in high-energy physics (HEP) rely on elaborate hardware, software and computing systems to sustain the high data rates necessary to study rare physics processes. The Institut für Experimentelle Kernphysik (EKP) at KIT is a member of the CMS and Belle II experiments, located at the LHC and SuperKEKB accelerators, respectively. These detectors share the requirement that enormous amounts of measurement data must be processed and analyzed, and that a comparable number of simulated events is required to compare experimental results with theoretical predictions. Classical HEP computing centers are dedicated sites which support multiple experiments and have the required software pre-installed. Nowadays, funding agencies encourage research groups to participate in shared HPC cluster models, where scientists from different domains use the same hardware to increase synergies. This shared usage proves to be challenging for HEP groups, due to their specialized software setup, which includes a custom OS (often Scientific Linux), libraries and applications. To overcome this hurdle, the EKP and the data center team of the University of Freiburg have developed a system to enable the HEP use case on a shared HPC cluster. To achieve this, an OpenStack-based virtualization layer is installed on top of a bare-metal cluster. While other user groups can run their batch jobs via the Moab workload manager directly on bare metal, HEP users can request virtual machines with a specialized machine image which contains a dedicated operating system and software stack. In contrast to similar installations, this hybrid setup requires no static partitioning of the cluster into physical and virtualized segments. As a unique feature, the placement of the virtual machine on the cluster nodes is scheduled by Moab, and the job lifetime is coupled to the lifetime of the virtual machine. This allows for seamless integration with the jobs sent by other user groups and honors the fairshare policies of the cluster. The developed thin integration layer between OpenStack and Moab can be adapted to other batch servers and virtualization systems, making the concept also applicable to other cluster operators. This contribution will report on the concept and implementation of an OpenStack-virtualized cluster used for HEP workflows. While the full cluster will be installed in spring 2016, a test-bed setup with 800 cores has been used to study the overall system performance, and dedicated HEP jobs were run in a virtualized environment over many weeks. Furthermore, the dynamic integration of the virtualized worker nodes, depending on the workload at the institute's computing system, will be described.
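The coupling of batch-job and VM lifetimes described in the abstract can be illustrated with a minimal sketch using the openstacksdk Python client: a wrapper, submitted as an ordinary Moab batch job, boots the HEP worker-node VM and blocks until the VM is gone, so that job and VM terminate together. The cloud name, image name, and flavor name below are illustrative assumptions, not the paper's actual configuration, and the real integration layer is not published here.

```python
# Hypothetical sketch of a lifetime-coupling wrapper (assumed names throughout).
# Submitted as a Moab batch job; the job ends when the VM disappears, and a
# cancelled job (walltime, preemption) tears the VM down in turn.
import time
import openstack  # pip install openstacksdk

conn = openstack.connect(cloud="hpc-cluster")  # cloud entry name is an assumption

# Image and flavor stand in for the dedicated HEP machine image and node size.
server = conn.compute.create_server(
    name="hep-worker",
    image_id=conn.compute.find_image("sl6-hep-worker").id,
    flavor_id=conn.compute.find_flavor("hep.16core").id,
)
server = conn.compute.wait_for_server(server)  # block until the VM is ACTIVE

try:
    # Poll until the VM shuts down or is deleted; the enclosing Moab job
    # finishes at the same moment, keeping fairshare accounting consistent.
    while conn.compute.find_server(server.id) is not None:
        time.sleep(60)
finally:
    # If Moab kills this job first, remove the VM so no orphan keeps running.
    conn.compute.delete_server(server.id, ignore_missing=True)
```

In this scheme Moab never needs to know about OpenStack internals: it schedules the wrapper like any other job, which is what lets HEP VMs mix freely with bare-metal jobs from other groups.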

【 Preview 】
Attachment list
File | Size | Format | View
Dynamic provisioning of a HEP computing infrastructure on a shared hybrid HPC system | 2105 KB | PDF | download
Document metrics
Downloads: 10 | Views: 45