Journal article details
Frontiers in Neuroscience
Trainable quantization for Speedy Spiking Neural Networks
Neuroscience
Benoît Miramond1  Alain Pegatoquet1  Andrea Castagnetti1
[1] LEAT, Université Côte d'Azur, CNRS, Sophia Antipolis, France
Keywords: Spiking Neural Networks; quantization error; low latency; sparsity; direct training
DOI: 10.3389/fnins.2023.1154241
Received: 2023-01-30; Accepted: 2023-02-14; Published: 2023
Source: Frontiers
【 Abstract 】

Spiking neural networks (SNNs) are considered the third generation of artificial neural networks (ANNs). SNNs perform computation using neurons and synapses that communicate through binary, asynchronous signals known as spikes. They have attracted significant research interest in recent years since their computing paradigm theoretically allows sparse and low-power operation. This hypothetical gain, assumed since the beginning of neuromorphic research, has however been limited by three main factors: the absence of an efficient learning rule competing with that of classical deep learning, the lack of mature learning frameworks, and a high data-processing latency that ultimately generates an energy overhead. While the first two limitations have recently been addressed in the literature, the major problem of latency remains unsolved. Indeed, information is not exchanged instantaneously between spiking neurons but gradually builds up over time as spikes are generated and propagated through the network. This paper focuses on quantization error, one of the main consequences of the SNN's discrete representation of information. We argue that quantization error is the main source of the accuracy drop between ANNs and SNNs. In this article we propose an in-depth characterization of SNN quantization noise. We then propose an end-to-end direct learning approach based on a new trainable spiking neural model. This model allows the threshold of neurons to be adapted during training and implements efficient quantization strategies. This novel approach better explains the global behavior of SNNs and minimizes the quantization noise during training. The resulting SNN can be trained over a limited number of timesteps, reducing latency, while beating state-of-the-art accuracy and preserving high sparsity on the main datasets considered by the neuromorphic community.
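The abstract does not give the exact neuron equations, so the following is a minimal PyTorch sketch of the core idea only: an integrate-and-fire neuron whose firing threshold is a trainable parameter, optimized end to end through a surrogate gradient, as in direct SNN training. With a rate code over T timesteps, the spike count quantizes the accumulated input in steps of the threshold, so learning the threshold lets each layer trade dynamic range against quantization noise. All names here (TrainableThresholdIF, the rectangular surrogate window, the soft reset) are illustrative assumptions, not the paper's implementation.

# Minimal sketch, assuming an integrate-and-fire neuron with a learned
# threshold and a rectangular surrogate gradient; not the paper's model.
import torch
import torch.nn as nn


class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, rectangular surrogate in backward."""

    @staticmethod
    def forward(ctx, v, threshold):
        ctx.save_for_backward(v, threshold)
        return (v >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        v, threshold = ctx.saved_tensors
        # Pass gradients only in a window of width 1.0 around the threshold
        # (the window width is an assumption).
        window = ((v - threshold).abs() < 0.5).float()
        # d/dv H(v - theta) ~ window; d/dtheta H(v - theta) ~ -window.
        return grad_output * window, -(grad_output * window).sum()


class TrainableThresholdIF(nn.Module):
    """Integrate-and-fire neuron whose threshold is learned during training,
    letting the quantization step of the rate code adapt per layer."""

    def __init__(self, init_threshold: float = 1.0):
        super().__init__()
        self.threshold = nn.Parameter(torch.tensor(init_threshold))

    def forward(self, inputs: torch.Tensor) -> torch.Tensor:
        # inputs: (timesteps, batch, features); returns spikes, same shape.
        v = torch.zeros_like(inputs[0])
        spikes = []
        for x_t in inputs:
            v = v + x_t                           # integrate synaptic current
            s_t = SurrogateSpike.apply(v, self.threshold)
            v = v - s_t * self.threshold          # soft reset by subtraction
            spikes.append(s_t)
        return torch.stack(spikes)

Because the threshold receives a gradient through the surrogate, it can shrink or grow during training so that the spike-count code covers the useful activation range with as little quantization noise as possible, even when the number of timesteps is small.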

【 License 】

Unknown   
Copyright © 2023 Castagnetti, Pegatoquet and Miramond.

【 Preview 】
Attachment list
Files                      Size    Format
RO202310109734561ZK.pdf    1059KB  PDF