Journal Article Details
Applied Sciences, Volume 10
Improving Classification Performance of Softmax Loss Function Based on Scalable Batch-Normalization
Wennan Cui [1];  Tao Zhang [1];  Zikuang He [2];  Qiuyu Zhu [2]
[1] Key Laboratory of Intelligent Infrared Perception, Chinese Academy of Sciences, Shanghai 200083, China;
[2] School of Communication & Information Engineering, Shanghai University, Shanghai 200444, China;
Keywords: convolutional neural network;    loss function;    gradient descent;
DOI: 10.3390/app10082950
Source: DOAJ
【 Abstract 】

Convolutional neural networks (CNNs) have achieved great success on computer vision tasks, especially image classification. With improvements in network structures and loss functions, image classification performance continues to rise. The classic Softmax + cross-entropy loss has been the norm for training neural networks for years; it is computed from the output probability of the ground-truth class, and the network's weights are then updated from the gradient of this loss. However, after several epochs of training, the back-propagation errors usually become almost negligible. For the above considerations, we propose adding batch normalization with an adjustable scale after the network output to alleviate the vanishing gradient problem in deep learning. The experimental results show that our method can significantly improve the final classification accuracy on different network structures, and it also outperforms many other improved classification losses.
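The idea described in the abstract can be illustrated with a short PyTorch sketch (not the authors' code): batch-normalize the raw logits at the network output, multiply them by an adjustable scale, and feed the result into the standard softmax cross-entropy loss. The module name `ScaledBNSoftmaxLoss` and the default `scale=16.0` are illustrative assumptions, not values taken from the paper.

```python
import torch.nn as nn

class ScaledBNSoftmaxLoss(nn.Module):
    """Minimal sketch: BatchNorm with an explicit scale applied to the
    output logits before the usual softmax + cross-entropy loss."""
    def __init__(self, num_classes, scale=16.0):
        super().__init__()
        # Normalize each class-logit dimension; disable BatchNorm's own
        # affine parameters so the explicit scale below sets the magnitude.
        self.bn = nn.BatchNorm1d(num_classes, affine=False)
        self.scale = scale              # adjustable scale (hypothetical default)
        self.ce = nn.CrossEntropyLoss()

    def forward(self, logits, targets):
        # logits: (batch, num_classes) raw network output
        normalized = self.bn(logits)    # zero-mean, unit-variance logits
        return self.ce(self.scale * normalized, targets)

# Usage with any backbone that outputs raw class scores:
# criterion = ScaledBNSoftmaxLoss(num_classes=10, scale=16.0)
# loss = criterion(model(images), labels)
```

Keeping the normalized logits at a controlled magnitude prevents the softmax probabilities from saturating, so the gradient of the loss stays informative in later epochs, which is the effect the abstract attributes to the scalable batch normalization.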

【 License 】

Unknown   
