Conference Paper Details
20th International Conference on Computing in High Energy and Nuclear Physics
Xrootd data access for LHC experiments at the INFN-CNAF Tier-1
Physics; Computer Science
Gregori, Daniele^1 ; Boccali, Tommaso^2 ; Noferini, Francesco^3 ; Prosperini, Andrea^1 ; Ricci, Pier Paolo^1 ; Sapunenko, Vladimir^1 ; Vagnoni, Vincenzo^4
^1 INFN-CNAF, Viale Berti-Pichat 6/2, 40127 Bologna, Italy
^2 INFN-Pisa, Largo B. Pontecorvo 3, 56127 Pisa, Italy
^3 Centro Fermi, Piazza del Viminale 1, 00184 Rome, Italy
^4 INFN-Bologna, Via Irnerio 46, 40126 Bologna, Italy
Keywords: Hierarchical storage; Large scale tests; Long-term storage; Management systems; Mass storage system; Parallel file system; Storage manager; Storage resources
Others: https://iopscience.iop.org/article/10.1088/1742-6596/513/4/042023/pdf
DOI: 10.1088/1742-6596/513/4/042023
Subject Classification: Computer Science (General)
Source: IOP
【 Abstract 】

The Mass Storage System installed at the INFN-CNAF Tier-1 is one of the largest hierarchical storage facilities in Europe. It currently provides storage resources for about 12% of all LHC data, as well as for other experiments. The Grid Enabled Mass Storage System (GEMSS) is the current solution implemented at CNAF; it is based on a custom integration between a high-performance parallel file system (General Parallel File System, GPFS) and a tape management system for long-term storage on magnetic media (Tivoli Storage Manager, TSM). Data access for Grid users has been provided for several years by the Storage Resource Manager (StoRM), an implementation of the standard SRM interface widely adopted within the WLCG community. The evolving requirements of the LHC experiments and other users are leading to the adoption of more flexible methods for accessing the storage. These include the implementation of so-called storage federations, i.e. geographically distributed federations allowing direct file access to the federated storage across sites. A specific integration between GEMSS and Xrootd has been developed at CNAF to match the requirements of the CMS experiment; an analogous integration had already been implemented for the ALICE use case, using ad-hoc Xrootd modifications. The new developments for CMS have been validated and are already available in the official Xrootd builds. This integration is currently in production, and appropriate large-scale tests have been carried out. In this paper we present the Xrootd solutions adopted for ALICE, CMS, ATLAS and LHCb to increase availability and optimize overall performance.
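In the storage-federation model summarized above, a client contacts an Xrootd redirector and is transparently redirected to a federated site (such as the CNAF Tier-1) that holds a replica of the requested file. As a minimal sketch of that access pattern, assuming a hypothetical redirector hostname and file path rather than the actual experiment endpoints, the snippet below uses the XRootD Python bindings to open and read a file through a federation redirector.

```python
# Minimal sketch of federated Xrootd read access.
# The redirector hostname and file path are illustrative placeholders.
# Requires the XRootD Python bindings (pip install xrootd).
from XRootD import client
from XRootD.client.flags import OpenFlags

# A federation redirector forwards the open() request to whichever
# federated site actually holds a replica of the requested file.
url = "root://redirector.example.org//store/example/dataset/file.root"  # placeholder

f = client.File()
status, _ = f.open(url, OpenFlags.READ)
if not status.ok:
    raise RuntimeError("open failed: %s" % status.message)

# Read the first kilobyte just to demonstrate direct remote access.
status, data = f.read(offset=0, size=1024)
if status.ok:
    print("read %d bytes via the federation" % len(data))
f.close()
```

The point of the sketch is that the client never needs to know which site serves the data; the redirection and any site-local staging (e.g. a tape recall through GEMSS) are handled behind the single root:// URL.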

【 Preview 】
Attachment List
Files | Size | Format | View
Xrootd data access for LHC experiments at the INFN-CNAF Tier-1 | 940KB | PDF | download
Document Metrics
Downloads: 31   Views: 17