Journal Article Details
Frontiers in Neuroscience
Efficient training of spiking neural networks with temporally-truncated local backpropagation through time
Neuroscience
Wenzhe Guo1,2  Mohammed E. Fouda3  Ahmed M. Eltawil2  Khaled Nabil Salama1
[1] Sensors Lab, Advanced Membranes and Porous Materials Center (AMPMC), Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia; [2] Communication and Computing Systems Lab, Computer, Electrical and Mathematical Sciences and Engineering (CEMSE) Division, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia; [3] Center for Embedded & Cyber-Physical Systems, University of California, Irvine, Irvine, CA, United States
Keywords: backpropagation through time; deep learning; energy-efficient training; local learning; neuromorphic computing; spiking neural networks
DOI: 10.3389/fnins.2023.1047008
Received: 2022-09-17; Accepted: 2023-03-20; Published: 2023
Source: Frontiers
【 Abstract 】

Directly training spiking neural networks (SNNs) remains challenging due to their complex neural dynamics and the intrinsic non-differentiability of their firing functions. The well-known backpropagation through time (BPTT) algorithm used to train SNNs suffers from a large memory footprint and prohibits backward and update unlocking, making it impossible to exploit the potential of locally-supervised training methods. This work proposes an efficient, direct training algorithm for SNNs that integrates a locally-supervised training method with a temporally-truncated BPTT algorithm. The proposed algorithm exploits both temporal and spatial locality in BPTT, contributing to a significant reduction in computational cost, including GPU memory utilization, main memory access, and arithmetic operations. We thoroughly explore the design space of temporal truncation length and local training block size and benchmark their impact on the classification accuracy of different networks running different types of tasks. The results reveal that temporal truncation degrades accuracy on frame-based datasets but improves accuracy on event-based datasets. Despite the resulting information loss, local training is capable of alleviating overfitting. The combined effect of temporal truncation and local training can slow the drop in accuracy and even improve it. In addition, training a deep SNN model, AlexNet, on the CIFAR10-DVS dataset leads to a 7.26% increase in accuracy, an 89.94% reduction in GPU memory, a 10.79% reduction in memory access, and a 99.64% reduction in MAC operations compared with standard end-to-end BPTT. The proposed method thus shows high potential to enable fast, energy-efficient on-chip training for real-time learning at the edge.
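The abstract describes the algorithm only at a high level. Below is a minimal illustrative sketch, not the authors' implementation, of how the two ideas can be combined when training an SNN with surrogate gradients in PyTorch: gradients are cut at block boundaries (spatial locality, via per-block auxiliary losses) and at truncation boundaries (temporal locality, via detaching neuron state every trunc_len steps). All names (SpikeFn, LIFBlock, train_step, trunc_len) and hyperparameters are hypothetical.

import torch
import torch.nn as nn

class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a rectangular surrogate gradient."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()
    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out * (v.abs() < 0.5).float()  # surrogate window around threshold

class LIFBlock(nn.Module):
    """One locally-trained block: linear synapse + LIF neurons + local classifier head."""
    def __init__(self, d_in, d_out, n_classes, tau=0.9):
        super().__init__()
        self.fc = nn.Linear(d_in, d_out)
        self.head = nn.Linear(d_out, n_classes)  # auxiliary head for the local loss
        self.tau = tau
    def step(self, x, v):
        v = self.tau * v + self.fc(x)   # leaky integration of input current
        s = SpikeFn.apply(v - 1.0)      # fire when membrane potential crosses 1.0
        v = v * (1 - s)                 # hard reset on spike
        return s, v

def train_step(blocks, opts, x_seq, y, trunc_len, loss_fn=nn.CrossEntropyLoss()):
    """x_seq: (T, batch, d_in). Gradients never cross block boundaries
    (inputs are detached between blocks) or truncation boundaries
    (membrane potentials are detached between segments)."""
    T, B = x_seq.shape[0], x_seq.shape[1]
    v = [torch.zeros(B, blk.fc.out_features) for blk in blocks]
    for t0 in range(0, T, trunc_len):                  # temporally-truncated segments
        logits = [0.0] * len(blocks)
        for t in range(t0, min(t0 + trunc_len, T)):
            x = x_seq[t]
            for i, blk in enumerate(blocks):
                s, v[i] = blk.step(x, v[i])
                logits[i] = logits[i] + blk.head(s)    # rate-coded local readout
                x = s.detach()                         # spatial locality: cut gradient here
        for i, opt in enumerate(opts):                 # independent per-block updates
            opt.zero_grad()
            loss_fn(logits[i], y).backward()
            opt.step()
        v = [vi.detach() for vi in v]                  # temporal locality: truncate BPTT

# Hypothetical usage with random spike-train input:
# blocks = [LIFBlock(784, 256, 10), LIFBlock(256, 128, 10)]
# opts = [torch.optim.Adam(b.parameters(), lr=1e-3) for b in blocks]
# train_step(blocks, opts, torch.rand(20, 32, 784).bernoulli(),
#            torch.randint(0, 10, (32,)), trunc_len=5)

Because each block's graph is severed from its neighbors and each segment's graph is freed after its update, activations need only be stored for one block over trunc_len steps, which is the source of the memory and compute savings the abstract reports.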

【 License 】

Unknown
Copyright © 2023 Guo, Fouda, Eltawil and Salama.

【 Preview 】
Attachment list
Files                     Size     Format
RO202310104300746ZK.pdf   3444 KB  PDF
Document metrics
Downloads: 1    Views: 1