Journal Article Details
IEEE Access, Vol. 7
Learning Sparse Convolutional Neural Network via Quantization With Low Rank Regularization
Zongcheng Ben [1], Dianle Zhou [2], Xiangrong Zeng [3], Yan Liu [3], Maojun Zhang [3], Xin Long [3]
[1] School of Computer, National University of Defense Technology, Changsha, China;
[2] School of Intelligent Science, National University of Defense Technology, Changsha, China;
[3] School of Systems Engineering, National University of Defense Technology, Changsha, China;
Keywords: convolutional neural network (CNN); weight quantization; spectral regularization; sparsity; visualization; channel pruning
DOI: 10.1109/ACCESS.2019.2911536
Source: DOAJ
【 Abstract 】

As tasks in artificial intelligence grow more refined, computation and storage costs increase at an exponential rate. The resulting resource demands of complicated neural networks have severely hindered their deployment on power-limited devices in recent years, so there is a pressing need to compress and accelerate deep networks with dedicated techniques. Considering the complementary characteristics of weight quantization and sparse regularization, in this paper we propose a low-rank sparse quantization (LRSQ) method that quantizes network weights and regularizes the corresponding structures at the same time. Our LRSQ can: 1) obtain low-bit quantized networks that reduce memory and computation cost and 2) learn a compact structure from complex convolutional networks for subsequent channel pruning, which significantly reduces FLOPs. In the experimental sections, we evaluate the proposed method on several popular models such as VGG-7/16/19 and ResNet-18/34/50, and the results show that it can dramatically reduce the parameters and channels of a network with only a slight loss in inference accuracy. Furthermore, we visualize and analyze the four-dimensional weight tensors, which reveals their low-rank and group-sparse structure. Finally, we prune the unimportant channels, i.e., the zero-channels in our quantized model, and find even slightly better accuracy than the standard full-precision network.
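The abstract only summarizes the optimization, but the core idea of training with low-bit quantized weights while a spectral (low-rank) regularizer shapes the kernels can be illustrated. The following is a minimal PyTorch sketch under stated assumptions, not the paper's exact LRSQ formulation: quantize_ste, QuantConv2d, low_rank_penalty, the bit-width, and the strength lam are all illustrative choices (a straight-through estimator for quantization and a nuclear-norm penalty on the unfolded kernel).

import torch
import torch.nn as nn
import torch.nn.functional as F

def quantize_ste(w, num_bits=2):
    # Uniform low-bit quantization with a straight-through estimator (STE):
    # the forward pass uses quantized values, while the backward pass treats
    # quantization as the identity so gradients reach the latent weights.
    qmax = 2 ** (num_bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-8) / qmax
    w_q = torch.clamp(torch.round(w / scale), -qmax, qmax) * scale
    return w + (w_q - w).detach()

class QuantConv2d(nn.Conv2d):
    # Convolution whose forward pass runs on quantized weights.
    def forward(self, x):
        return F.conv2d(x, quantize_ste(self.weight), self.bias,
                        self.stride, self.padding, self.dilation, self.groups)

def low_rank_penalty(weight):
    # Unfold the 4-D kernel (out_ch, in_ch, kH, kW) into a matrix with one
    # row per output channel and penalize its nuclear norm (sum of singular
    # values), encouraging a low-rank weight structure. Here the penalty is
    # applied to the latent full-precision weights for numerical stability.
    return torch.linalg.matrix_norm(weight.reshape(weight.size(0), -1), ord='nuc')

# One training step: task loss plus the low-rank regularizer per conv layer.
model = nn.Sequential(QuantConv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
lam = 1e-4  # regularization strength (illustrative value)

x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
loss = F.cross_entropy(model(x), y)
for m in model.modules():
    if isinstance(m, QuantConv2d):
        loss = loss + lam * low_rank_penalty(m.weight)
optimizer.zero_grad()
loss.backward()
optimizer.step()

After such training, output channels whose quantized kernel slices have collapsed to zero can be removed, which corresponds to the channel-pruning step the abstract reports as nearly lossless.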

【 License 】

Unknown   
