A New Sparse Matrix Storage Format for Improving SpMV Performance by SIMD
Abstract
Sparse matrix-vector multiplication (SpMV) is an important computational kernel in scientific and engineering applications. On modern processors with memory hierarchies, SpMV using the traditional CSR (Compressed Sparse Row) format often runs far below 10% of the hardware's peak floating-point performance. Because of its irregular memory access pattern, CSR-based SpMV cannot directly exploit the SIMD vectorization supported by current processor architectures. To make use of SIMD, a new storage format, CSRL (Compressed Sparse Row with Local information), is proposed for sparse matrices with locality characteristics. The CSRL format reduces the number of memory accesses during SpMV and allows loads and computation to be vectorized with SIMD instructions, improving SpMV performance. Experiments show that, compared with the implementation in the Intel MKL library (version 10.3), CSRL-based SpMV achieves an average performance improvement of 29.5% and a maximum of 89%.
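For context, the baseline the abstract compares against is the standard CSR kernel. The sketch below is a generic illustration, not code from the paper: the indirect access `x[col_idx[j]]` in the inner loop is the irregular memory access the abstract refers to, which prevents direct SIMD vectorization.

```python
# Baseline CSR (Compressed Sparse Row) SpMV, the format the abstract
# compares CSRL against. Illustrative only; not from the paper.
def spmv_csr(row_ptr, col_idx, vals, x):
    """Compute y = A @ x for a CSR matrix with len(row_ptr) - 1 rows."""
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):
        acc = 0.0
        # Nonzeros of row i live in vals[row_ptr[i]:row_ptr[i+1]].
        for j in range(row_ptr[i], row_ptr[i + 1]):
            # The gather x[col_idx[j]] is the irregular access that
            # blocks straightforward SIMD vectorization of this loop.
            acc += vals[j] * x[col_idx[j]]
        y[i] = acc
    return y

# 2x3 matrix [[1, 0, 2], [0, 3, 0]] in CSR form:
row_ptr = [0, 2, 3]
col_idx = [0, 2, 1]
vals = [1.0, 2.0, 3.0]
print(spmv_csr(row_ptr, col_idx, vals, [1.0, 1.0, 1.0]))  # → [3.0, 3.0]
```

Formats such as CSRL aim to expose runs of nearby nonzeros so that loads from `x` become contiguous and SIMD-friendly; the exact CSRL encoding is described in the paper itself.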
Source
数值计算与计算机应用 (Journal of Numerical Methods and Computer Applications), 2014, 35(4): 269-276
Keywords
sparse matrix; sparse matrix-vector multiplication; vectorization; locality; CSRL
Affiliations
1. Laboratory of Parallel Software and Computational Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
2. Laboratory of Parallel Software and Computational Science, Institute of Software, Chinese Academy of Sciences; State Key Laboratory of Computer Science, Beijing 100190, China
Language
Chinese
Document type
Research article
ISSN
1000-3266
Subject
Automation and computer technology
Funding
National 973 Program; National 863 Program; National Natural Science Foundation of China
Record ID
CSCD:5310347
References (12)
1. Saad Y. Iterative Methods for Sparse Linear Systems, 2003. [CSCD citations: 84]
2. Vuduc R. OSKI: a library of automatically tuned sparse matrix kernels. Journal of Physics: Conference Series, 2005, 16(1): 521. [CSCD citations: 7]
3. Willcock J. Accelerating sparse matrix computations via data compression. Proceedings of the 20th Annual International Conference on Supercomputing, 2006: 307-316. [CSCD citations: 2]
4. Kourtis K. Optimizing sparse matrix-vector multiplication using index and value compression. Proceedings of the 5th Conference on Computing Frontiers, 2008: 87-96. [CSCD citations: 1]
5. Kourtis K. CSX: an extended compression format for SpMV on shared memory systems. Proceedings of the 16th ACM Symposium on Principles and Practice of Parallel Programming, 2011: 247-256. [CSCD citations: 2]
6. Sun X. CRSD: application specific auto-tuning of SpMV for diagonal sparse matrices. Euro-Par 2011 Parallel Processing, 2011: 316-327. [CSCD citations: 1]
7. Li J. SMAT: an input adaptive auto-tuner for sparse matrix-vector multiplication. Proceedings of the 34th ACM SIGPLAN Conference on Programming Language Design and Implementation, 2013: 117-126. [CSCD citations: 1]
8. Im E J. Optimizing sparse matrix computations for register reuse in SPARSITY. Computational Science—ICCS 2001, 2001: 127-136. [CSCD citations: 1]
9. D'Azevedo E F. Vectorized sparse matrix multiply for compressed row storage format. Computational Science—ICCS 2005, 2005: 99-106. [CSCD citations: 1]
10. Williams S. Optimization of sparse matrix-vector multiplication on emerging multicore platforms. Parallel Computing, 2009, 35(3): 178-194. [CSCD citations: 10]
11. Liu X. Efficient sparse matrix-vector multiplication on x86-based many-core processors. Proceedings of the 27th ACM International Conference on Supercomputing, 2013: 273-282. [CSCD citations: 1]
12. 袁娥 (Yuan E). Research on automatic performance optimization techniques for SpMV and their applications (in Chinese). 计算机研究与发展 (Journal of Computer Research and Development), 2009, 46(7): 1117-1126. [CSCD citations: 13]