Among the various costs of a context switch, its impact on the performance of L2 caches is the most significant because of the resulting high miss penalty. To mitigate the impact of context switches, several OS approaches have been proposed to reduce their number. Nevertheless, frequent context switches are inevitable in certain cases and cause severe L2 cache performance degradation. Moreover, traditional prefetching techniques are ineffective in the face of context switches, as their prediction tables are also subject to loss of content during a context switch.

To reduce the impact of frequent context switches, we propose restoring a program's locality by prefetching into the L2 cache the data the program was using before it was swapped out. A Global History List (GHL) is used to record a process's L2 read accesses in LRU order. These accesses are saved along with the process's context when the process is swapped out, and are loaded to guide prefetching when it is swapped in. We also propose a feedback mechanism that greatly reduces the memory traffic incurred by our prefetching scheme, and a phase-guided prefetching scheme that complements GHL prefetching. Experiments show significant speedups over baseline architectures, with and without traditional prefetching, in the presence of frequent context switches.
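The core idea above can be illustrated with a minimal software sketch. This is not the paper's hardware implementation; the class and function names, the fixed list capacity, and the MRU-first replay order are all illustrative assumptions. It models a per-process history of L2 read addresses kept in LRU order, saved at swap-out, and replayed as prefetches at swap-in.

```python
from collections import OrderedDict

class GlobalHistoryList:
    """Sketch of a per-process history of L2 read accesses kept in LRU order.

    Names and capacity are hypothetical; the paper describes a hardware
    structure, not a Python class.
    """

    def __init__(self, capacity=8):
        self.capacity = capacity
        self.accesses = OrderedDict()  # address -> None, most recent last

    def record_read(self, address):
        # A re-referenced address moves to the MRU position; otherwise it is
        # inserted, evicting the LRU entry when the list is full.
        if address in self.accesses:
            self.accesses.move_to_end(address)
        else:
            self.accesses[address] = None
            if len(self.accesses) > self.capacity:
                self.accesses.popitem(last=False)  # drop LRU entry

    def save_on_swap_out(self):
        # Snapshot saved with the process context, most-recently-used first.
        return list(reversed(self.accesses))

def prefetch_on_swap_in(saved_list, issue_prefetch):
    # Replay the saved accesses to restore the process's L2 working set
    # before it resumes execution.
    for address in saved_list:
        issue_prefetch(address)
```

In this sketch, replaying MRU-first biases the prefetcher toward the data the process touched just before being swapped out; the feedback mechanism mentioned above, which throttles the resulting memory traffic, is not modeled here.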
Extending Data Prefetching to Cope with Context Switch Misses