Journal Article Details
Frontiers in Big Data
GPU-Accelerated Machine Learning Inference as a Service for Computing in Neutrino Experiments
Benjamin Hawks [1]  Kevin Pedro [1]  Michael Wang [1]  Maria Acosta Flechas [1]  Tingjun Yang [1]  Burt Holzman [1]  Kyle Knoepfel [1]  Jeffrey Krupa [2]  Philip Harris [2]  Nhan Tran [3]
[1] Fermi National Accelerator Laboratory, Batavia, IL, United States; [2] Massachusetts Institute of Technology, Cambridge, MA, United States; [3] Northwestern University, Evanston, IL, United States
Keywords: machine learning; heterogeneous (CPU+GPU) computing; GPU (graphics processing unit); particle physics; cloud computing (SaaS)
DOI: 10.3389/fdata.2020.604083
Source: DOAJ
【 Abstract 】

Machine learning algorithms are becoming increasingly prevalent and performant in the reconstruction of events in accelerator-based neutrino experiments. These sophisticated algorithms can be computationally expensive. At the same time, the data volumes of such experiments are rapidly increasing. The demand to process billions of neutrino events with many machine learning algorithm inferences creates a computing challenge. We explore a computing model in which heterogeneous computing with GPU coprocessors is made available as a web service. The coprocessors can be efficiently and elastically deployed to provide the right amount of computing for a given processing task. With our approach, Services for Optimized Network Inference on Coprocessors (SONIC), we integrate GPU acceleration specifically for the ProtoDUNE-SP reconstruction chain without disrupting the native computing workflow. With our integrated framework, we accelerate the most time-consuming task, track and particle shower hit identification, by a factor of 17. This results in a factor of 2.7 reduction in the total processing time when compared with CPU-only production. For this particular task, only 1 GPU is required for every 68 CPU threads, providing a cost-effective solution.
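As a back-of-the-envelope check, the two speedups quoted in the abstract (a factor of 17 on the hit-identification task, a factor of 2.7 overall) can be combined via Amdahl's law to infer what fraction of the CPU-only processing time the accelerated task must have occupied. The fraction below is our inference from those two numbers, not a figure stated in the paper:

```python
# Speedups quoted in the abstract.
task_speedup = 17.0     # GPU acceleration of track/shower hit identification
overall_speedup = 2.7   # reduction in total processing time

# Amdahl's law: overall = 1 / ((1 - f) + f / task_speedup),
# where f is the fraction of CPU-only time spent in the accelerated task.
# Solving for f:
f = (1 - 1 / overall_speedup) / (1 - 1 / task_speedup)

print(f"Hit identification fraction of CPU-only time: {f:.0%}")
```

This suggests the accelerated task accounted for roughly two-thirds of the CPU-only processing time, which is consistent with the abstract calling it "the most time-consuming task."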

【 License 】

Unknown   
