Journal Article Details
IEEE Access
Training Multi-Bit Quantized and Binarized Networks with a Learnable Symmetric Quantizer
Jacob A. Abraham1  Phuoc Pham2  Jaeyong Chung2 
[1] Computer Engineering Research Center, The University of Texas at Austin, Austin, TX, USA; [2] Department of Electronics Engineering, Incheon National University, Incheon, Republic of Korea
Keywords: Learnable quantizer; quantization; binarization; model compression; machine learning; neuromorphic computing
DOI: 10.1109/ACCESS.2021.3067889
Source: DOAJ
【 Abstract 】

Quantizing the weights and activations of deep neural networks is essential for deploying them on resource-constrained devices or in cloud platforms for at-scale services. While binarization is a special case of quantization, this extreme case often leads to several training difficulties and necessitates specialized models and training methods. As a result, recent quantization methods do not provide binarization, losing the most resource-efficient option, and quantized and binarized networks have remained distinct research areas. We examine binarization difficulties within a quantization framework and find that all that is needed to enable binary training is a symmetric quantizer, good initialization, and careful hyperparameter selection. These techniques also lead to substantial improvements in multi-bit quantization. We demonstrate our unified quantization framework, denoted UniQ, on the ImageNet dataset with various architectures such as ResNet-18, ResNet-34, and MobileNetV2. For multi-bit quantization, UniQ outperforms existing methods, achieving state-of-the-art accuracy. In binarization, the achieved accuracy is comparable to existing state-of-the-art methods even without modifying the original architectures.
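The recipe the abstract names, a learnable symmetric quantizer that covers both multi-bit and 1-bit (binary) cases in one code path, can be sketched as below. This is a minimal illustrative sketch, not the paper's exact UniQ formulation: the class name `LearnableSymmetricQuantizer`, the straight-through estimator, and the 2·std step-size initialization are assumptions made here for demonstration.

```python
# A hypothetical sketch of a learnable symmetric quantizer in the spirit of
# the abstract. Names, the straight-through estimator, and the step-size
# initialization heuristic are assumptions, not the paper's UniQ method.
import torch
import torch.nn as nn


class LearnableSymmetricQuantizer(nn.Module):
    """Quantize a tensor to levels symmetric around zero.

    With bits=1 this degenerates to binarization (levels {-s, +s}), so
    multi-bit quantization and binarization share a single code path.
    """

    def __init__(self, bits: int = 2):
        super().__init__()
        self.bits = bits
        # Largest integer level; e.g. bits=2 -> levels {-1, 0, +1}.
        self.qmax = 2 ** (bits - 1) - 1 if bits > 1 else 1
        # Learnable step size s; initialized from data on the first call.
        self.step = nn.Parameter(torch.tensor(1.0))
        self.register_buffer("initialized", torch.tensor(False))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not bool(self.initialized):
            with torch.no_grad():
                # Assumed heuristic: place the clipping point near 2*std
                # of the input (stands in for the paper's initialization).
                self.step.copy_(2.0 * x.std() / self.qmax)
                self.initialized.fill_(True)
        s = self.step.abs() + 1e-8  # keep the step size strictly positive
        v = torch.clamp(x / s, -float(self.qmax), float(self.qmax))
        if self.bits == 1:
            # Binarization: sign() with a straight-through gradient.
            v = v + (torch.sign(v) - v).detach()
        else:
            # Multi-bit: round() with a straight-through gradient.
            v = v + (torch.round(v) - v).detach()
        return v * s  # gradients reach both x and the step size s


# Usage: binarize with bits=1, or quantize to 2 bits; gradients flow to
# both the input tensor and the learnable step size.
w = torch.randn(64, 64, requires_grad=True)
quant = LearnableSymmetricQuantizer(bits=1)
quant(w).pow(2).mean().backward()
```

Because the 1-bit branch differs only in replacing round() with sign(), binarization falls out of the same framework rather than requiring a separate model, which is the unification the abstract describes.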

【 License 】

Unknown   
