Journal Article Details
Frontiers in Neuroscience
SPIDEN: deep Spiking Neural Networks for efficient image denoising
Neuroscience
Benoît Miramond [1], Alain Pegatoquet [1], Andrea Castagnetti [2]
[1] Université Côte d'Azur, CNRS, LEAT, Sophia Antipolis, France
Keywords: denoising; Spiking Neural Networks; quantization error; low latency; sparsity; direct training; energy consumption
DOI: 10.3389/fnins.2023.1224457
Received: 2023-05-17; Accepted: 2023-07-27; Published: 2023
Source: Frontiers
【 Abstract 】

In recent years, Deep Convolutional Neural Networks (DCNNs) have surpassed classical algorithms on image restoration tasks. However, most of these methods are not computationally efficient. In this work, we investigate Spiking Neural Networks (SNNs) for the specific and previously unaddressed case of image denoising, with the goal of matching the performance of conventional DCNNs while reducing the computational cost. This task is challenging for two reasons. First, since denoising is a regression task, the network must predict a continuous value (i.e., the noise amplitude) for each pixel of the image with high precision. Second, state-of-the-art results have been obtained with deep networks, which are notably difficult to train in the spiking domain. To overcome these issues, we propose a formal analysis of the information conversion process carried out by Integrate-and-Fire (IF) spiking neurons, and we formalize the trade-off between conversion error and activation sparsity in SNNs. We then propose, for the first time, an image denoising solution based on SNNs. The networks are trained directly in the spike domain using surrogate gradient learning and backpropagation through time. Experimental results show that the proposed SNN reaches a level of performance close to that of state-of-the-art CNN-based solutions. Specifically, our SNN achieves 30.18 dB of signal-to-noise ratio on the Set12 dataset, only 0.25 dB below the performance of the equivalent DCNN. Moreover, we show that this performance can be achieved with low latency, i.e., using few timesteps, and with a significant level of sparsity. Finally, we analyze the energy consumption for different network latencies and network sizes. We show that the energy consumption of SNNs increases with longer latencies, making them more energy efficient than CNNs only at very small inference latencies. However, we also show that, by increasing the network size, SNNs can provide competitive denoising performance while reducing energy consumption by 20%.
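The conversion trade-off discussed in the abstract stems from how an IF neuron encodes a continuous activation as a spike rate over a finite number of timesteps. The following is a minimal illustrative sketch of that rate-coding behavior, not the paper's actual model; the soft-reset scheme and the specific parameter values are assumptions chosen for clarity:

```python
import numpy as np

def if_neuron(inputs, threshold=1.0):
    """Simulate one Integrate-and-Fire (IF) neuron over T timesteps.

    inputs: array of shape (T,), the input current at each timestep.
    Returns the binary spike train and the membrane potential trace.
    """
    v = 0.0
    spikes = np.zeros_like(inputs)
    trace = np.zeros_like(inputs)
    for t, x in enumerate(inputs):
        v += x                      # integrate the input current
        if v >= threshold:          # fire once the threshold is crossed
            spikes[t] = 1.0
            v -= threshold          # soft reset: subtract the threshold
        trace[t] = v
    return spikes, trace

# A constant input of 0.4 with threshold 1.0 fires 4 times in 10 steps,
# so the spike rate (0.4) encodes the input value exactly here.
spikes, _ = if_neuron(np.full(10, 0.4))
rate = spikes.mean()
```

With few timesteps the rate can only take a few discrete values, which is the conversion (quantization) error; longer simulations reduce that error but produce more spikes, i.e., less sparsity and higher energy, matching the trade-off the abstract formalizes.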

【 License 】

Unknown   
Copyright © 2023 Castagnetti, Pegatoquet and Miramond.

【 Preview 】
Attachment list
File | Size | Format | View
RO202310101960627ZK.pdf | 2689KB | PDF | download
Document metrics
Downloads: 8; Views: 0