Journal Article Details
IEEE Access
GPU Static Modeling Using PTX and Deep Structured Learning
Joao Guerreiro [1]; Nuno Roma [1]; Aleksandar Ilic [2]; Pedro Tomas [2]
[1] INESC-ID, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal;
Keywords: GPU; DVFS; modeling; scaling-factors; energy savings
DOI: 10.1109/ACCESS.2019.2951218
Source: DOAJ
【 Abstract 】

In the quest for exascale computing, energy efficiency is a fundamental goal in high-performance computing systems, typically pursued via dynamic voltage and frequency scaling (DVFS). However, this type of mechanism relies on accurate methods for predicting the performance and power/energy consumption of such systems. Unlike previous works in the literature, this research focuses on creating novel GPU predictive models that do not require run-time information from the applications. The proposed models, implemented using recurrent neural networks, take into account the sequence of GPU assembly instructions (PTX) and can accurately predict changes in the execution time, power and energy consumption of applications when the frequencies of the different GPU domains (core and memory) are scaled. Validated with 24 applications on GPUs from different NVIDIA microarchitectures (Turing, Volta, Pascal and Maxwell), the proposed models attain high accuracy. In particular, the power consumption scaling model achieves average error rates of 7.9% (Tesla T4), 6.7% (Titan V), 5.9% (Titan Xp) and 5.4% (GTX Titan X), which is comparable to state-of-the-art run-time counter-based models. When the models are used to select the minimum-energy frequency configuration, significant energy savings are attained: 8.0% (Tesla T4), 6.0% (Titan V), 29.0% (Titan Xp) and 11.5% (GTX Titan X).
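For illustration only, the following is a minimal Python (PyTorch) sketch of this kind of PTX-sequence model: an LSTM reads a tokenized PTX instruction sequence and, together with a target (core, memory) frequency pair, outputs predicted scaling factors for execution time, power and energy relative to a reference configuration. All class names, dimensions and the tokenization scheme are illustrative assumptions and not the authors' implementation.

# Hypothetical sketch: LSTM over a tokenized PTX sequence, conditioned on a
# (core, memory) frequency pair, predicting [time, power, energy] scaling
# factors. Names and sizes are assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class PTXScalingModel(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)   # PTX instruction tokens
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim + 2, 3)           # + (core, mem) freqs -> 3 factors

    def forward(self, ptx_tokens, freq_pair):
        # ptx_tokens: (batch, seq_len) integer token ids of the kernel's PTX
        # freq_pair:  (batch, 2) normalized target core/memory frequencies
        emb = self.embed(ptx_tokens)
        _, (h_n, _) = self.lstm(emb)                       # final hidden state summarizes the kernel
        features = torch.cat([h_n[-1], freq_pair], dim=1)
        return self.head(features)                         # predicted [time, power, energy] scaling

# Usage sketch: one kernel, one candidate frequency configuration.
model = PTXScalingModel(vocab_size=500)
tokens = torch.randint(0, 500, (1, 200))    # placeholder tokenized PTX sequence
freqs = torch.tensor([[0.8, 1.0]])          # e.g. 80% core, 100% memory frequency
scaling = model(tokens, freqs)              # tensor of shape (1, 3)

Under this sketch, selecting the minimum-energy configuration described in the abstract would amount to evaluating the model over the available (core, memory) frequency pairs and choosing the one with the lowest predicted energy scaling factor.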

【 License 】

Unknown   
