The goal of this thesis is threefold. First, it gauges the performance of different computational engineering libraries on different platforms. Given a numerical computing library, we compare the performance of all of its supported backends running the same benchmark. The benchmarks are reduction, sorting, prefix scan, and SAXPY over vectors ranging from 10M to 25M elements. The second part consists of the use of profiling tools to understand code performance and the underlying hardware/software interplay. Finally, we discuss a performance tracking infrastructure that makes the benchmarking process and the analysis of its results reproducible. We describe this infrastructure in terms of its source control management, its automation with Makefiles and Python scripts, and its use of a relational database management system, which together allow a user to quickly retrieve performance metrics for various libraries on multiple architectures.
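As a rough illustration of the workflow the abstract describes, the sketch below times one of the listed kernels (SAXPY) and records the result in a relational database. It is a minimal, hypothetical example: NumPy stands in for the (unspecified) numerical computing library and backend, SQLite for the RDBMS, and the table schema and function names are assumptions rather than the thesis's actual infrastructure.

```python
# Hypothetical sketch: time a SAXPY benchmark and record the result in SQLite.
# NumPy and SQLite stand in for the unspecified library backends and RDBMS;
# the schema and names here are illustrative only.
import sqlite3
import time

import numpy as np


def saxpy(a: float, x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Single-precision a*x + y, the SAXPY kernel."""
    return a * x + y


def run_benchmark(n: int, repeats: int = 5) -> float:
    """Return the best wall-clock time (seconds) over several repetitions."""
    rng = np.random.default_rng(0)
    x = rng.random(n, dtype=np.float32)
    y = rng.random(n, dtype=np.float32)
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        saxpy(2.0, x, y)
        best = min(best, time.perf_counter() - start)
    return best


def record(db_path: str, library: str, backend: str, n: int, seconds: float) -> None:
    """Append one benchmark measurement to a results table."""
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS results "
            "(library TEXT, backend TEXT, kernel TEXT, n INTEGER, seconds REAL)"
        )
        conn.execute(
            "INSERT INTO results VALUES (?, ?, 'saxpy', ?, ?)",
            (library, backend, n, seconds),
        )


if __name__ == "__main__":
    for n in (10_000_000, 25_000_000):  # vector sizes mentioned in the abstract
        t = run_benchmark(n)
        record("benchmarks.db", "numpy", "cpu", n, t)
        print(f"SAXPY n={n:>11,d}: {t * 1e3:.2f} ms")
```

In an infrastructure of the kind the abstract describes, a script like this would presumably be driven per backend from a Makefile target, so that the accumulated results table can later be queried across libraries and architectures.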