Thesis Details
Dynamical Systems in Spiking Neuromorphic Hardware
Voelker, Aaron Russell (Faculty of Mathematics); advisor: Eliasmith, Chris
University of Waterloo
Keywords: nengo; loihi; neural engineering; spinnaker; spiking networks; force learning; computational neuroscience; neuromorphics; Doctoral Thesis; theoretical neuroscience; dynamical systems; reservoir computing; recurrent neural networks; braindrop; temporal representation; long short-term memory
Others  :  https://uwspace.uwaterloo.ca/bitstream/10012/14625/4/Voelker_Aaron.pdf
Canada | English
Source: UWSPACE Waterloo Institutional Repository
PDF
【 Abstract 】

Dynamical systems are universal computers. They can perceive stimuli, remember, learn from feedback, plan sequences of actions, and coordinate complex behavioural responses. The Neural Engineering Framework (NEF) provides a general recipe for formulating models of such systems as coupled sets of nonlinear differential equations and compiling them onto recurrently connected spiking neural networks, akin to a programming language for spiking models of computation. The Nengo software ecosystem supports the NEF and compiles such models onto neuromorphic hardware. In this thesis, we analyze the theory driving the success of the NEF, and expose several core principles underpinning its correctness, scalability, completeness, robustness, and extensibility. We also derive novel theoretical extensions to the framework that enable it to far more effectively leverage a wide variety of dynamics in digital hardware, and to exploit the device-level physics in analog hardware. At the same time, we propose a novel set of spiking algorithms that recruit an optimal nonlinear encoding of time, which we call the Delay Network (DN). Backpropagation across stacked layers of DNs dramatically outperforms stacked Long Short-Term Memory (LSTM) networks, a state-of-the-art deep recurrent architecture, in both accuracy and training time on a continuous-time memory task and a chaotic time-series prediction benchmark. The basic component of this network is shown to function on state-of-the-art spiking neuromorphic hardware, including Braindrop and Loihi. This implementation approaches the energy-efficiency of the human brain in the former case, and the precision of conventional computation in the latter.
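The Delay Network mentioned above can be sketched concretely: it is a linear time-invariant system whose state-space matrices realize the Padé approximant of a pure delay of θ seconds, and whose state can be decoded to recover the delayed input. A minimal numpy sketch follows, assuming the standard (A, B) matrices reported in the DN/LMU literature; the Euler integration and the 1 Hz sine test signal are illustrative choices, not taken from the thesis.

```python
import numpy as np

def delay_network(d, theta):
    """State-space (A, B) of a d-dimensional Delay Network: the
    [(d-1)/d] Pade approximant of a pure delay exp(-theta*s)."""
    A = np.zeros((d, d))
    B = np.zeros(d)
    for i in range(d):
        B[i] = (2 * i + 1) * (-1) ** i / theta
        for j in range(d):
            A[i, j] = (2 * i + 1) / theta * (-1 if i < j else (-1) ** (i - j + 1))
    return A, B

# Euler-simulate the network delaying a sine wave by theta seconds.
d, theta, dt = 6, 0.5, 1e-3
A, B = delay_network(d, theta)
t = np.arange(0, 4, dt)
u = np.sin(2 * np.pi * t)  # 1 Hz test input
x = np.zeros(d)
y = np.empty_like(t)
for k, uk in enumerate(u):
    x = x + dt * (A @ x + B * uk)  # dx/dt = Ax + Bu
    # Decoding u(t - theta) sums the state: the shifted Legendre
    # polynomials underlying the DN all evaluate to 1 at full delay.
    y[k] = x.sum()

# After the transient, y(t) should closely track u(t - theta).
mask = t > 2 * theta
target = np.sin(2 * np.pi * (t - theta))
rmse = np.sqrt(np.mean((y[mask] - target[mask]) ** 2))
print(f"RMSE vs. ideal delay: {rmse:.4f}")
```

As a sanity check for d = 1 this reduces to the lowpass 1/(θs + 1), the [0/1] Padé approximant of the delay; in the NEF such a linear system would then be mapped onto a recurrently connected spiking population rather than integrated directly.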

【 Preview 】
Attachments
Files Size Format View
Dynamical Systems in Spiking Neuromorphic Hardware 20115KB PDF download
Document Metrics
Downloads: 96 | Views: 36