Parallel programming requires significant developer effort, and producing optimized parallel code is even more time-consuming. Moreover, tuned parallel codes typically perform well only on a single architecture, or even a single microarchitecture. This thesis focuses on SPMD code written in CUDA, noting that such programs must obey a number of constraints to achieve high performance on an NVIDIA GPU. Under these constraints, source-level optimizations can improve the performance of CUDA code on Rigel, a MIMD accelerator architecture currently under development. These optimizations produce code for Rigel that runs significantly faster than naïve translations; in some cases, benchmarks run nearly four times faster, rivaling the performance of hand-optimized code. Unlike a GPU, Rigel allows a flexible execution model, so CUDA code written for Rigel need not expose the performance information that other architectures rely on for good performance. As a result, CUDA code written for Rigel performs poorly when executed on a GPU, and is significantly slower than CUDA code tuned for the GPU.
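As a concrete illustration of the kind of GPU constraint the abstract alludes to (this sketch is not drawn from the thesis itself), NVIDIA GPUs reward kernels in which adjacent threads access adjacent memory locations, so loads and stores coalesce into wide transactions. The kernel name and parameters below are hypothetical:

```cuda
// Minimal SPMD kernel sketch obeying a common GPU performance constraint:
// thread i touches element i, so consecutive threads in a warp access
// consecutive addresses and their memory accesses coalesce.
__global__ void scale(const float *in, float *out, float alpha, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)                                      // guard against overrun
        out[i] = alpha * in[i];                     // coalesced load/store
}
```

A MIMD target such as Rigel does not impose this lockstep access pattern, which is one reason code written in Rigel's more flexible style need not map back efficiently onto a GPU.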
Achieving performance portability across parallel accelerator architectures