Dissertation Details
Slipstream Execution Mode for CMP-based Shared Memory Systems
Ibrahim, Khaled Zakarya Moustafa; Gregory T. Byrd, Committee Chair; Thomas M. Conte, Committee Member; Eric Rotenberg, Committee Member; Frank Mueller, Committee Member
University: North Carolina State University
Keywords: Slipstream; Speculation; Computer Architecture; Shared Memory; Chip multiprocessor; OpenMP
Others: https://repository.lib.ncsu.edu/bitstream/handle/1840.16/3927/etd.pdf?sequence=1&isAllowed=y
United States | English
PDF
【 Abstract 】

Scalability of applications on distributed shared-memory (DSM) multiprocessors is limited by communication and synchronization overheads. At some point, using more processors to increase parallelism yields diminishing returns or even degrades performance. When increasing concurrency is futile, we propose an additional mode of execution, called slipstream mode, that instead enlists extra processors to assist parallel tasks by reducing perceived overheads.

We consider DSM multiprocessors built from dual-processor chip multiprocessor (CMP) nodes (e.g., the IBM Power-4 CMP) with a shared L2 cache. A parallel task is allocated on one processor of each CMP node. The other processor of each node executes a reduced version of the same task. The reduced version skips shared-memory stores and synchronization, allowing it to run ahead of the true task. Even with the skipped operations, the reduced task makes accurate forward progress and generates an accurate reference stream, because branches and addresses depend primarily on private data. Slipstream execution mode yields multiple benefits. First, the reduced task prefetches data on behalf of the true task. Second, reduced tasks provide a detailed picture of future reference behavior, enabling a number of optimizations aimed at accelerating coherence events. We investigate a well-known optimization, self-invalidation. We also investigate providing a confidence mechanism for speculation after barrier synchronization.

We investigate the implementation of an OpenMP compiler that supports slipstream execution mode. We discuss how each OpenMP construct can be implemented to take advantage of slipstream mode, and we present a minor extension that allows runtime or compile-time control of slipstream execution. We also investigate the interaction between slipstream mechanisms and OpenMP scheduling. Our implementation supports both static and dynamic scheduling in slipstream mode.

For multiprocessor systems with up to 16 CMP nodes, slipstream mode is 12-19% faster with prefetching only. With self-invalidation also enabled, performance is improved by as much as 29%. We extended slipstream mode to provide a confidence mechanism for barrier speculation. This mechanism identifies dependencies and tries to avoid the dependency violations that lead to misspeculation (and, subsequently, rollback). Rollbacks are reduced by up to 95%, and performance improves by up to 13%.

Slipstream execution mode enables a wide range of optimizations based on an accurate future image of program behavior. It does not require the custom auxiliary hardware tables used by history-based predictors.
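
As an illustration only, the sketch below gives a software-level analogue of the true-task/reduced-task split described in the abstract, written in C with OpenMP. It is not the dissertation's compiler or hardware mechanism; the two-thread node model, the prefetch_line() helper, and the choice of which data counts as shared are illustrative assumptions.

/*
 * Conceptual sketch only (an assumption-laden analogue, not the actual
 * slipstream implementation): two OpenMP threads stand in for the two
 * cores of one CMP node. Thread 0 is the "true" task and performs the
 * real shared-memory stores; thread 1 is the "reduced" task, which skips
 * those stores and instead touches the same cache lines so they are
 * warm when the true task reaches them.
 */
#include <omp.h>
#include <stdio.h>

#define N 1024

static double shared_out[N];  /* shared data: written only by the true task   */
static double priv_in[N];     /* private-style input that drives control flow */

/* Hypothetical helper: hint the hardware to fetch a line for writing. */
static inline void prefetch_line(const void *p)
{
    __builtin_prefetch(p, 1 /* prepare for write */, 3 /* high temporal locality */);
}

int main(void)
{
    for (int i = 0; i < N; i++)
        priv_in[i] = (double)i;

    #pragma omp parallel num_threads(2)
    {
        /* Assumption for the sketch: thread 1 plays the reduced task. */
        int reduced = (omp_get_thread_num() == 1);

        for (int i = 0; i < N; i++) {
            double v = priv_in[i] * 2.0;        /* both copies compute branches/addresses     */
            if (reduced)
                prefetch_line(&shared_out[i]);  /* reduced task: skip the store, touch the line */
            else
                shared_out[i] = v;              /* true task: the real shared-memory store    */
        }
        /* A real reduced task would also skip barriers and locks; omitted here. */
    }

    printf("shared_out[%d] = %.1f\n", N - 1, shared_out[N - 1]);
    return 0;
}

Built with an ordinary OpenMP compiler (e.g., gcc -fopenmp), the reduced thread performs no useful stores; its only effect is to generate the same reference stream slightly ahead of the true thread, which corresponds to the prefetching benefit the abstract describes.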
