Journal: Data in Brief
Title: Performance data of multiple-precision scalar and vector BLAS operations on CPU and GPU
Author: Konstantin Isupov [1]
[1] Corresponding author.
Keywords: Multiple-precision arithmetic; Floating-point computations; Graphics processing units; CUDA; BLAS
DOI :
Source: DOAJ
【 Abstract 】
Many optimized linear algebra packages support single- and double-precision floating-point data types. However, a number of important applications require a higher level of precision, up to hundreds or even thousands of digits. This article presents performance data for four dense basic linear algebra subprograms (ASUM, DOT, SCAL, and AXPY) implemented using existing extended- and multiple-precision software for conventional central processing units and CUDA-compatible graphics processing units. The following open-source packages are considered: MPFR, MPDECIMAL, ARPREC, MPACK, XBLAS, GARPREC, CAMPARY, CUMP, and MPRES-BLAS. The execution time of the CPU and GPU implementations is measured at a fixed problem size and various levels of numeric precision. The data in this article are related to the research article entitled “Design and implementation of multiple-precision BLAS Level 1 functions for graphics processing units” [1].
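For orientation, the sketch below shows the kind of BLAS Level 1 kernel whose execution time is reported in the dataset: a multiple-precision DOT product written against the MPFR library on the CPU. It is a minimal illustration, not code taken from the article or from the benchmarked packages, and the chosen precision (424 bits, roughly 128 decimal digits) and vector length (1000) are assumptions made only for this example.

```c
/*
 * Minimal sketch of a multiple-precision DOT product with MPFR (CPU).
 * Not the article's benchmark code; precision and problem size are
 * illustrative assumptions.
 */
#include <stdio.h>
#include <stdlib.h>
#include <mpfr.h>

/* dot = sum_{i=0}^{n-1} x[i] * y[i], accumulated in multiple precision */
static void mp_dot(mpfr_t dot, mpfr_t *x, mpfr_t *y, size_t n)
{
    mpfr_set_zero(dot, 1);                          /* dot = +0 */
    for (size_t i = 0; i < n; i++)
        mpfr_fma(dot, x[i], y[i], dot, MPFR_RNDN);  /* dot = x[i]*y[i] + dot */
}

int main(void)
{
    const size_t n = 1000;            /* problem size (assumed) */
    const mpfr_prec_t prec = 424;     /* ~128 decimal digits (assumed) */

    mpfr_t *x = malloc(n * sizeof(mpfr_t));
    mpfr_t *y = malloc(n * sizeof(mpfr_t));
    for (size_t i = 0; i < n; i++) {
        mpfr_init2(x[i], prec);
        mpfr_init2(y[i], prec);
        /* x[i] * y[i] = 1 for every i, so the exact dot product is n */
        mpfr_set_d(x[i], 1.0 / (double)(i + 1), MPFR_RNDN);
        mpfr_set_d(y[i], (double)(i + 1), MPFR_RNDN);
    }

    mpfr_t dot;
    mpfr_init2(dot, prec);
    mp_dot(dot, x, y, n);
    mpfr_printf("dot = %.30Rg\n", dot);   /* expected: 1000 */

    for (size_t i = 0; i < n; i++) { mpfr_clear(x[i]); mpfr_clear(y[i]); }
    free(x); free(y);
    mpfr_clear(dot);
    mpfr_free_cache();
    return 0;
}
```

Timing such a kernel at a fixed n while varying prec mirrors the measurement setup described in the abstract: the problem size stays constant and only the precision level changes.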
【 License 】
Unknown