IEEE Access
ScaleNet: A Convolutional Network to Extract Multi-Scale and Fine-Grained Visual Features
Jinpeng Zhang [1], Shan Yu [1], Jinming Zhang [2], Guyue Hu [3], Yang Chen [4]
[1] Brainnetome Center and National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China; State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, China
Keywords: Image classification; convolutional neural networks; ResNet; deconvolution
DOI: 10.1109/ACCESS.2019.2946425
Source: DOAJ
【 Abstract 】
Many convolutional neural networks have been proposed for image classification in recent years. Most tend to decrease the plane size of feature maps stage-by-stage, such that the feature maps generated within each stage share the same plane size. This concept governs the design of most classification networks. However, it can also lead to semantic deficiency of high-resolution feature maps, as they are always placed in the shallow layers of a network. Here, we propose a novel network architecture, named ScaleNet, which consists of stacked convolution-deconvolution blocks and a multipath residual structure. Unlike most current networks, ScaleNet extracts image features by a cascaded deconstruction-reconstruction process. It can generate scale-variable feature maps within each block and stage, thereby realizing multiscale feature extraction at any depth of the network. On the CIFAR-10, CIFAR-100, and ImageNet datasets, ScaleNet demonstrated classification performance competitive with the state-of-the-art ResNet. In addition, ScaleNet exhibited a powerful ability to capture strong semantic and fine-grained features on its high-resolution feature maps. The code is available at https://github.com/zhjpqq/scalenet.
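The abstract describes the core building block as a convolution that reduces spatial resolution followed by a deconvolution that restores it, wrapped in a residual path. The PyTorch snippet below is a minimal sketch of such a block, assuming a strided Conv2d for the "deconstruction" step and a ConvTranspose2d for the "reconstruction" step; the class name, layer choices, and hyperparameters are illustrative assumptions and do not reproduce the authors' released implementation (see the GitHub link above for the actual code).

```python
import torch
import torch.nn as nn

class ConvDeconvBlock(nn.Module):
    """Hypothetical convolution-deconvolution block with a residual path,
    loosely following the abstract's description of ScaleNet."""

    def __init__(self, channels):
        super().__init__()
        # "Deconstruction": strided convolution halves the spatial resolution,
        # producing a coarser-scale feature map inside the block.
        self.down = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, stride=2,
                      padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # "Reconstruction": transposed convolution (deconvolution) restores
        # the original resolution, so each block yields scale-variable features.
        self.up = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, kernel_size=3, stride=2,
                               padding=1, output_padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        low_res = self.down(x)       # coarse-scale features
        high_res = self.up(low_res)  # restored high-resolution features
        return x + high_res          # residual path at the original scale

if __name__ == "__main__":
    block = ConvDeconvBlock(channels=64)
    y = block(torch.randn(1, 64, 32, 32))
    print(y.shape)  # torch.Size([1, 64, 32, 32])
```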
【 License 】
Unknown