Dissertation Details
Adaptation of the MapReduce programming framework to compute-intensive data-analytics kernels
Farivar, Reza
Keywords: Multiple Independent Threads on a Heterogeneous Resource Architecture (MITHRA); Partitioned Iterative Convergence (PIC); Plasma; Loop Maximizing; graphics processing unit (GPU); Compute Unified Device Architecture (CUDA); MapReduce; Hadoop; Mahout; HaLoop; iMapReduce; Spark; Twister
Others: https://www.ideals.illinois.edu/bitstream/handle/2142/42216/Reza_Farivar.pdf?sequence=1&isAllowed=y
United States | English
Source: The Illinois Digital Environment for Access to Learning and Scholarship
PDF
【 Abstract 】

Compute-intensive data-analytic (CIDA) applications have become a major component of many business domains as well as scientific computing. These algorithms stem from domains as diverse as web analysis and social networks, machine learning and data mining, text analysis, bioinformatics, astronomy image analysis, business analytics, large-scale graph algorithms, image/video processing and recognition, some high-performance computing problems, quantitative finance, and simulation, among others. These computational problems deal with massive data sets and require a large amount of computation per data element. This thesis presents a vision of CIDA applications programmed in a MapReduce-style framework and running on clusters of accelerators. Regardless of the type of accelerator, whether GPUs (NVIDIA or AMD), other manycore architectures (such as Intel Larrabee or MIC), or heterogeneous chips (AMD Fusion or the IBM Cell processor), a fundamental condition is imposed on the software, namely an increased sensitivity to locality. As a result, the common theme in this thesis is to increase the locality of CIDA applications. We report on four research efforts to achieve this goal.

The Multiple Independent Threads on a Heterogeneous Resource Architecture (MITHRA) project integrates Hadoop MapReduce with GPUs, on which the map() functions execute. By moving the map() functions to GPUs, we increase the locality of reference and gain better performance. We have shown that when the MITHRA model is applicable (for instance, for Monte Carlo algorithms), each computing node can perform orders of magnitude more work in the same run time.

We then introduce partitioned iterative convergence (PIC) as an approach to realizing iterative algorithms on clusters. We observed that conventional MapReduce implementations of iterative algorithms are quite inefficient as a result of several factors. Complementary to prior work, we focused on addressing the challenges of high network traffic due to frequent model updates and lack of parallelism across iterations. PIC has two phases. The first phase, called the best-effort phase, partitions the problem and runs the sub-problems on individual cluster nodes, where locality can be exploited better. The results of this phase can be numerically inaccurate (about 3% based on experimental results), but can be computed much faster. The second phase of PIC, called the top-off phase, runs the original iterative algorithm for a few more iterations (starting from the results of the best-effort phase) to compute an accurate answer.

Finally, we introduce two GPU-based projects that increase the performance of MapReduce-style functions on GPUs. The first is loop maximizing, a code transformation for GPUs that can eliminate control-flow divergence (and hence serialization on GPUs) and result in better utilization of GPU processing elements. Using this technique, we have achieved the highest reported speedups for gene alignment algorithms. The second GPU-based project is Plasma, a library for dynamic shared-memory allocation and access on GPUs under independent execution of GPU threads, as happens in a MapReduce-style environment for both map() and reduce() functions.

Together, the two MapReduce adaptations (MITHRA and PIC), the GPU-based loop-maximizing optimization, and the Plasma library lay out a plan for achieving good performance on locality-sensitive clusters.
This thesis shows the feasibility of this approach and describes how each of these projects contributes to that overall goal.
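
To make the PIC control flow described above more concrete, the following is a minimal, self-contained sketch of the best-effort/top-off structure. It uses one-dimensional k-means as a stand-in iterative-convergence algorithm and plain in-process Java in place of a Hadoop cluster; the choice of k-means, the class and method names (PicSketch, kmeansStep, iterateToConvergence), and the simple centroid-averaging merge step are illustrative assumptions, not the thesis implementation.

import java.util.Arrays;
import java.util.Random;

public class PicSketch {

    // One k-means iteration: assign points to the nearest centroid, then recompute centroids.
    static double[] kmeansStep(double[] points, double[] centroids) {
        double[] sum = new double[centroids.length];
        int[] count = new int[centroids.length];
        for (double p : points) {
            int best = 0;
            for (int c = 1; c < centroids.length; c++)
                if (Math.abs(p - centroids[c]) < Math.abs(p - centroids[best])) best = c;
            sum[best] += p;
            count[best]++;
        }
        double[] next = centroids.clone();
        for (int c = 0; c < centroids.length; c++)
            if (count[c] > 0) next[c] = sum[c] / count[c];
        return next;
    }

    // Iterate on one data partition until the model stops moving
    // (the best-effort phase, which PIC runs independently on each node).
    static double[] iterateToConvergence(double[] partition, double[] centroids) {
        double[] cur = centroids.clone();
        for (int it = 0; it < 1000; it++) {
            double[] next = kmeansStep(partition, cur);
            if (Arrays.equals(next, cur)) break;  // converged locally
            cur = next;
        }
        return cur;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        double[] data = new double[10000];
        for (int i = 0; i < data.length; i++)
            data[i] = rnd.nextGaussian() + (i % 2 == 0 ? 0.0 : 5.0);

        double[] init = {0.0, 1.0};  // initial model: two centroids
        int parts = 4;

        // Phase 1 (best-effort): partition the input and iterate each partition to
        // convergence independently, so per-iteration model updates stay node-local.
        double[][] local = new double[parts][];
        for (int p = 0; p < parts; p++) {
            double[] slice = Arrays.copyOfRange(data, p * data.length / parts,
                                                (p + 1) * data.length / parts);
            local[p] = iterateToConvergence(slice, init);
        }

        // Merge the per-partition models (here, a simple average); the result is
        // approximate, which is the error the abstract quantifies at roughly 3%.
        double[] merged = new double[init.length];
        for (double[] m : local)
            for (int c = 0; c < merged.length; c++) merged[c] += m[c] / parts;

        // Phase 2 (top-off): a few iterations of the original global algorithm,
        // seeded with the best-effort result, to recover an accurate answer.
        double[] model = merged;
        for (int it = 0; it < 5; it++) model = kmeansStep(data, model);

        System.out.println("Final centroids: " + Arrays.toString(model));
    }
}

In an actual cluster deployment, the best-effort phase would run as per-node jobs over the partitioned data, and only the merged model and the few top-off iterations would generate cross-node traffic, which is the locality benefit the abstract describes.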

【 Preview 】
Attachment List
Files Size Format View
Adaptation of the MapReduce programming framework to compute-intensive data-analytics kernels 1339KB PDF download
Document Metrics
Downloads: 13  Views: 14