Journal Article Details
NEUROCOMPUTING, Volume 461
HEMP: High-order entropy minimization for neural network compression
Article
Tartaglione, Enzo [1]; Lathuiliere, Stephane [2]; Fiandrotti, Attilio [1,2]; Cagnazzo, Marco [2]; Grangetto, Marco [1]
[1] Univ Torino, Turin, Italy
[2] Telecom Paris, Paris, France
Keywords: Deep learning; Compression; Entropy; Neural networks; Regularization
DOI: 10.1016/j.neucom.2021.07.022
Source: Elsevier
【 Abstract 】

We formulate the entropy of a quantized artificial neural network as a differentiable function that can be plugged in as a regularization term in the cost function minimized by gradient descent. Our formulation scales efficiently beyond the first order and is agnostic to the quantization scheme. The network can thus be trained to minimize the entropy of its quantized parameters, so that they can be optimally compressed via entropy coding. We evaluate our entropy formulation by quantizing and compressing well-known network architectures over multiple datasets. Our approach compares favorably with similar methods: it benefits from a higher-order entropy estimate, is flexible towards non-uniform quantization (we use Lloyd-Max quantization), scales to any entropy order to be minimized, and is efficient in terms of compression. We show that HEMP works in synergy with other approaches that prune or quantize the model itself, delivering significant gains in storage size without harming the model's performance. (C) 2021 Elsevier B.V. All rights reserved.
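The abstract describes a differentiable entropy estimate of the quantized parameters that is added to the training loss as a regularizer and minimized by gradient descent. The PyTorch sketch below is a minimal, first-order illustration of that idea only, assuming a softmax-based soft assignment of weights to quantization centroids; the name soft_entropy, the temperature, the centroid grid, and the weight lambda_h are hypothetical choices for illustration and do not reproduce HEMP's actual higher-order, quantizer-agnostic estimator.

    import torch

    def soft_entropy(weights, centroids, temperature=0.1):
        """First-order entropy (in bits) of a soft quantization of `weights`.

        Each weight is softly assigned to the centroids via a softmax over
        negative distances, which keeps the estimate differentiable.
        """
        w = weights.reshape(-1, 1)                    # (N, 1) flattened weights
        d = torch.abs(w - centroids.reshape(1, -1))   # (N, K) distances to centroids
        p = torch.softmax(-d / temperature, dim=1)    # (N, K) soft bin assignments
        q = p.mean(dim=0)                             # (K,) marginal bin probabilities
        return -(q * torch.log2(q + 1e-12)).sum()     # Shannon entropy in bits

    # Usage sketch: the toy model, centroid grid, and lambda_h are illustrative only.
    model = torch.nn.Linear(10, 2)
    centroids = torch.linspace(-1.0, 1.0, 16)  # Lloyd-Max levels would replace this grid
    x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))
    task_loss = torch.nn.functional.cross_entropy(model(x), y)
    rate_loss = sum(soft_entropy(p, centroids) for p in model.parameters())
    loss = task_loss + 1e-2 * rate_loss  # lambda_h = 1e-2 trades accuracy for rate
    loss.backward()

After training under such a regularizer, the (hard-)quantized parameters have low empirical entropy, so a subsequent entropy coder can compress them close to that bound; the paper's formulation additionally extends this to higher entropy orders.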

【 License 】

Free   
