Journal Article Details
Sensors
Data-Efficient Sensor Upgrade Path Using Knowledge Distillation
Bart Diricx [1]; Jonas De Vylder [1]; Pieter Van Molle [2]; Bart Dhoedt [2]; Tim Verbelen [2]; Pieter Simoens [2]; Cedric De Boom [2]; Bert Vankeirsbilck [2]
[1] Barco Healthcare, Barco N.V., 8500 Kortrijk, Belgium; [2] IDLab, Department of Information Technology, Ghent University, 9052 Gent, Belgium
Keywords: deep learning; knowledge distillation; cross-modal distillation; sensor upgrade; skin lesion classification; multispectral imaging
DOI: 10.3390/s21196523
Source: DOAJ
【 Abstract 】

Deep neural networks have achieved state-of-the-art performance in image classification. Due to this success, deep learning is now also being applied to other data modalities such as multispectral images, lidar and radar data. However, successfully training a deep neural network requires a large dataset. Therefore, transitioning to a new sensor modality (e.g., from regular camera images to multispectral camera images) might result in a drop in performance, due to the limited availability of data in the new modality. This might hinder the adoption rate and time to market for new sensor technologies. In this paper, we present an approach to leverage the knowledge of a teacher network that was trained using the original data modality to improve the performance of a student network on a new data modality: a technique known in the literature as knowledge distillation. By applying knowledge distillation to the problem of sensor transition, we can greatly speed up this process. We validate this approach using a multimodal version of the MNIST dataset. Especially when little data is available in the new modality (i.e., 10 images), training with additional teacher supervision results in increased performance, with the student network scoring a test set accuracy of 0.77, compared to an accuracy of 0.37 for the baseline. We also explore two extensions to the default method of knowledge distillation, which we evaluate on a multimodal version of the CIFAR-10 dataset: an annealing scheme for the hyperparameter α and selective knowledge distillation. Of these two, the first yields the best results. Choosing the optimal annealing scheme results in an increase in test set accuracy of 6%. Finally, we apply our method to the real-world use case of skin lesion classification.
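For context on the hyperparameter α mentioned in the abstract, the sketch below illustrates the conventional (Hinton-style) knowledge-distillation loss, in which α balances the teacher's soft targets against the hard labels, together with one possible linear annealing of α. It is a minimal PyTorch sketch under standard assumptions; the function names, the default temperature, and the annealing helper are illustrative and are not taken from the paper itself.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, alpha=0.5, temperature=4.0):
    # Hard-label loss: standard cross-entropy on the (scarce) new-modality labels.
    hard_loss = F.cross_entropy(student_logits, labels)
    # Soft-label loss: KL divergence between the temperature-softened
    # teacher and student output distributions, scaled by T^2 as usual.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # alpha weights teacher supervision against the hard labels.
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

def linear_anneal(alpha_start, alpha_end, epoch, total_epochs):
    # One possible instantiation of an annealing scheme for alpha:
    # linearly interpolate from alpha_start to alpha_end over training.
    frac = min(epoch / max(total_epochs - 1, 1), 1.0)
    return alpha_start + frac * (alpha_end - alpha_start)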

【 License 】

Unknown   
