Conference Paper Details
21st International Conference on Computing in High Energy and Nuclear Physics
Online data handling and storage at the CMS experiment
Physics; Computer Science
Andre, J.-M.^5 ; Andronidis, A.^2 ; Behrens, U.^1 ; Branson, J.^4 ; Chaze, O.^2 ; Cittolin, S.^4 ; Darlea, G.-L.^6 ; Deldicque, C.^2 ; Demiragli, Z.^6 ; Dobson, M.^2 ; Dupont, A.^2 ; Erhan, S.^3 ; Gigi, D.^2 ; Glege, F.^2 ; Gómez-Ceballos, G.^6 ; Hegeman, J.^2 ; Holzner, A.^4 ; Jimenez-Estupiñán, R.^2 ; Masetti, L.^2 ; Meijers, F.^2 ; Meschi, E.^2 ; Mommsen, R.K.^5 ; Morovic, S.^2 ; Nuñez-Barranco-Fernández, C.^2 ; O'Dell, V.^5 ; Orsini, L.^2 ; Paus, C.^6 ; Petrucci, A.^2 ; Pieri, M.^4 ; Racz, A.^2 ; Roberts, P.^2 ; Sakulin, H.^2 ; Schwick, C.^2 ; Stieger, B.^2 ; Sumorok, K.^6 ; Veverka, J.^6 ; Zaza, S.^2 ; Zejdl, P.^5
DESY, Hamburg, Germany^1
CERN, Geneva, Switzerland^2
University of California, Los Angeles, CA, United States^3
University of California, San Diego, CA, United States^4
FNAL, Chicago, IL, United States^5
Massachusetts Institute of Technology, Cambridge, MA, United States^6
Keywords: Data acquisition system; Distributed file systems; High-level triggers; Network equipment; Off-line processing; Software and hardware; Switching technology; Transfer systems
Full text: https://iopscience.iop.org/article/10.1088/1742-6596/664/8/082009/pdf
DOI: 10.1088/1742-6596/664/8/082009
Subject classification: Computer Science (General)
Source: IOP
Abstract

During the LHC Long Shutdown 1, the CMS Data Acquisition (DAQ) system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and support new detector back-end electronics. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms, and handle output data transport and storage has also been redesigned to be completely file-based. All the metadata needed for bookkeeping are stored in files as well, in the form of small documents using the JSON encoding. The Storage and Transfer System (STS) is responsible for aggregating the files produced by the HLT, storing them temporarily, and transferring them to the T0 facility at CERN for subsequent offline processing. The STS merger service aggregates the output files from ∼62 HLT sources, produced at an aggregate rate of ∼2 GB/s. An estimated bandwidth of 7 GB/s in concurrent read/write mode is needed. Furthermore, the STS has to be able to store several days of continuous running, so an estimated 250 TB of total usable disk space is required. In this article we present the technological and implementation choices for the three components of the STS: the distributed file system, the merger service, and the transfer system.
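The abstract describes bookkeeping metadata kept as small JSON documents alongside the HLT output, with a merger service aggregating files from ∼62 sources. The sketch below illustrates that aggregation pattern in Python; the field names (`source`, `run`, `lumisection`, `events`, `size_bytes`) are hypothetical stand-ins, not the actual CMS schema.

```python
import json

# Hypothetical per-source bookkeeping documents, modeled on the paper's
# description of small JSON metadata files written next to the HLT output.
# All field names here are illustrative assumptions.
source_docs = [
    {"source": f"bu-{i:02d}", "run": 1, "lumisection": 1,
     "events": 100 + i, "size_bytes": 1_000_000 + i * 1000}
    for i in range(62)  # ~62 HLT sources feed the merger
]

def merge_metadata(docs):
    """Aggregate per-source documents into one merged bookkeeping record,
    summing event counts and byte sizes the way a merger step might."""
    return {
        "run": docs[0]["run"],
        "lumisection": docs[0]["lumisection"],
        "events": sum(d["events"] for d in docs),
        "size_bytes": sum(d["size_bytes"] for d in docs),
        "n_sources": len(docs),
    }

merged = merge_metadata(source_docs)
print(json.dumps(merged, indent=2))
```

Summing counters from per-source JSON documents lets the merged record be verified against its inputs without re-reading the (much larger) data files, which is one motivation for a file-based bookkeeping design.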

Attachments
Online data handling and storage at the CMS experiment (PDF, 5301 KB)