Final Report: PetaScale Application Development Analysis (Grant Number DE-FG02-04ER25629)
Numrich, Robert W.
University of Minnesota, Minneapolis, MN
Keywords: Differential Geometry; Curvilinear Coordinates; Dimensions; Productivity; Computers
DOI: 10.2172/948514 | RP-ID: DOE/ER/25629-1; FG02-04ER25629; 948514
United States | English
Source: UNT Digital Library
Abstract
The results obtained from this project will fundamentally change the way we look at computer performance analysis. These results are made possible by the precise definition of a consistent system of measurement with a set of primary units designed specifically for computer performance analysis. This system of units, along with their associated dimensions, allows us to apply the methods of dimensional analysis, based on the Pi Theorem, to define scaling and self-similarity relationships. These relationships reveal new insights into experimental results that otherwise seem only vaguely correlated. Applying the method to cache-miss data revealed scaling relationships that were not seen by those who originally collected the data.

Applying dimensional analysis to the performance of parallel numerical algorithms revealed that computational force is a unifying concept for understanding the interaction between hardware and software. The efficiency of these algorithms depends, in a very intimate way, on the balance between hardware forces and software forces. Analysis of five different algorithms showed that performance analysis can be reduced to a study of the differential geometry of the efficiency surface. Each algorithm defines a set of curvilinear coordinates, specific to that algorithm, and different machines follow different paths along the surface depending on the difference in balance between hardware forces and software forces. Two machines with the same balance of forces follow the same path and are self-similar.

The most important result from the project is the statement of the Principle of Computational Least Action. This principle follows from the identification of a dynamical system underlying computer performance analysis. Instructions in a computer are modeled as a classical system under the influence of computational forces. Each instruction generates kinetic energy during execution, and the sum of the kinetic energy for all instructions produces a kinetic energy spectrum as a function of time. These spectra look very much like the spectra used by chemists to analyze the properties of molecules. Large spikes in the spectra reveal events during execution, such as cache misses, that limit performance. The area under the kinetic energy spectrum is the computational action generated by the program. This computational action defines a normed metric space that measures the size of a program in terms of its action norm and the distance between programs in terms of the norm of the difference of their actions. The same idea can be applied to a set of programmers writing code and leads to a computational action metric that measures programmer productivity. In both cases, experimental evidence suggests that highly efficient programs and highly productive programmers generate the least computational action.
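The following is a minimal sketch, not taken from the report, of how the action metric described above might be computed in practice: the computational action is taken as the area under a sampled kinetic-energy spectrum, and the distance between two programs as the difference of their actions. The function names, the sampling interval, and the synthetic spectra are illustrative assumptions.

```python
# Illustrative sketch (not the report's implementation):
# computational action as the area under a sampled kinetic-energy spectrum,
# and a distance between programs as the difference of their actions.
import numpy as np

def computational_action(energy, dt):
    """Approximate the area under the kinetic-energy spectrum
    by a Riemann sum over samples taken every dt seconds."""
    return float(np.sum(energy) * dt)

def action_distance(action_a, action_b):
    """Distance between two programs: absolute difference (norm of the
    difference) of their computational actions."""
    return abs(action_a - action_b)

# Synthetic spectra: a smooth run, and a run with a large spike in
# kinetic energy (for example, a burst of cache misses near t = 0.5 s).
dt = 1e-3
t = np.arange(0.0, 1.0, dt)
smooth = np.full_like(t, 2.0)
spiky = smooth + 50.0 * np.exp(-((t - 0.5) / 0.01) ** 2)

a_smooth = computational_action(smooth, dt)
a_spiky = computational_action(spiky, dt)
print(a_smooth, a_spiky, action_distance(a_smooth, a_spiky))
```

Under this reading, the smooth run generates the smaller action, consistent with the report's observation that highly efficient programs generate the least computational action.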
Preview
Files | Size | Format
---|---|---
948514.pdf | 1376 KB | PDF