Journal Article Details
IEEE Access
A Neural Network Model Compression Approach Based on Deep Feature Map Transfer
Yixuan Xu [1], Zhibo Guo [2], Ying Zhang [2], Xin Yao [2], Linghao Wang [2]
[1] School of Computer Science, University of Nottingham, Nottingham, U.K.; [2] School of Information Engineering, Yangzhou University, Yangzhou, China
Keywords: Knowledge distillation; machine learning; model compression; neural networks; pattern recognition; transfer learning
DOI: 10.1109/ACCESS.2020.3019432
Source: DOAJ
【 Abstract 】

Neural networks are widely used in computer vision. However, as their application fields continue to expand, high-precision models with large numbers of parameters are difficult to deploy on small devices with limited resources. To obtain a small but efficient network, the soft output of a teacher network is used to train a student network in the teacher-student framework. This paper proposes a new neural network model compression method based on deep feature map transfer (DFMT), which fully exploits characteristics of the visual system. A small decoder is added to the network to generate a deep feature map from the features the network extracts, and this feature map is used to transfer knowledge. In addition, cosine similarity is used as the evaluation metric for knowledge transfer. The proposed method yields a smaller model with better accuracy. Experiments on benchmark datasets demonstrate the validity and advancement of the proposed approach.
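The abstract states that cosine similarity serves as the evaluation metric for knowledge transfer between teacher and student feature maps. A minimal sketch of such a transfer loss is below; the function names (`cosine_similarity`, `transfer_loss`) and the toy 2x2 feature maps are illustrative assumptions, not the paper's actual implementation, which operates on decoder-generated deep feature maps inside a full training pipeline.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length flattened feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def transfer_loss(teacher_map, student_map):
    """Knowledge-transfer loss: 1 - cosine similarity of flattened maps.

    Loss is 0 when the student's feature map points in the same
    direction as the teacher's, and grows as they diverge.
    (Illustrative assumption; the paper's exact loss may differ.)
    """
    t = [v for row in teacher_map for v in row]
    s = [v for row in student_map for v in row]
    return 1.0 - cosine_similarity(t, s)

# Toy 2x2 feature maps: the student closely matches the teacher,
# so the transfer loss is near zero.
teacher = [[0.2, 0.8], [0.5, 0.1]]
student = [[0.25, 0.75], [0.45, 0.15]]
loss = transfer_loss(teacher, student)
```

Because cosine similarity compares directions rather than magnitudes, the student is rewarded for reproducing the *pattern* of teacher activations even if its activation scale differs.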

【 License 】

Unknown
