Journal Article Details
Frontiers in Neuroscience
Neuromorphic Hardware Learns to Learn
Wolfgang Maass [1], Thomas Bohnstingl [1], Franz Scherr [1], Karlheinz Meier [2], Christian Pehle [2]
[1] Institute for Theoretical Computer Science, Graz University of Technology, Graz, Austria; [2] Kirchhoff-Institute for Physics, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
Keywords: spiking neural networks; learning-to-learn; Markov decision processes; multi-armed bandits; neuromorphic hardware; HICANN-DLS
DOI: 10.3389/fnins.2019.00483
Source: DOAJ
【 Abstract 】

Hyperparameters and learning algorithms for neuromorphic hardware are usually chosen by hand to suit a particular task. In contrast, networks of neurons in the brain were optimized through extensive evolutionary and developmental processes to work well on a wide range of computing and learning tasks. Occasionally this process has been emulated through genetic algorithms, but these themselves require hand-designed details and tend to provide only a limited range of improvements. Instead, we employ other powerful gradient-free optimization tools, such as the cross-entropy method and evolutionary strategies, to port the function of biological optimization processes to neuromorphic hardware. As an example, we show that these optimization algorithms enable neuromorphic agents to learn very efficiently from rewards. In particular, meta-plasticity, i.e., the optimization of the learning rule that the agents use, substantially enhances the reward-based learning capability of the hardware. In addition, we demonstrate for the first time Learning-to-Learn benefits on such hardware, in particular the capability to extract abstract knowledge from prior learning experiences that speeds up the learning of new but related tasks. Learning-to-Learn is especially well suited for accelerated neuromorphic hardware, since the hardware makes it feasible to carry out the very large number of network computations that this approach requires.
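To make the two-level structure described in the abstract concrete, the following is a minimal sketch, not the authors' HICANN-DLS implementation: an outer-loop cross-entropy method optimizes the hyperparameters of a simple reward-based learning rule, with fitness defined as the average reward obtained across a family of multi-armed bandit tasks (the Learning-to-Learn setting). The inner softmax action-value learner, all parameter names, and the NumPy simulation are illustrative assumptions standing in for the spiking network on the chip.

```python
# Illustrative sketch of gradient-free Learning-to-Learn (assumed setup, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)

def run_bandit_episode(theta, n_arms=5, n_trials=100):
    """Inner loop: learn one randomly drawn bandit task with a softmax
    action-value rule whose hyperparameters come from the outer loop."""
    learning_rate, inv_temperature = np.exp(theta)   # keep both positive
    arm_probs = rng.uniform(0.1, 0.9, size=n_arms)   # task: unknown reward probabilities
    q = np.zeros(n_arms)                             # action-value estimates
    total_reward = 0.0
    for _ in range(n_trials):
        logits = inv_temperature * q
        p = np.exp(logits - logits.max())
        p /= p.sum()
        a = rng.choice(n_arms, p=p)
        r = float(rng.random() < arm_probs[a])       # Bernoulli reward
        q[a] += learning_rate * (r - q[a])           # simple delta rule
        total_reward += r
    return total_reward / n_trials

def fitness(theta, n_tasks=20):
    """Learning-to-Learn objective: average reward over a family of bandit tasks."""
    return np.mean([run_bandit_episode(theta) for _ in range(n_tasks)])

# Outer loop: cross-entropy method over the (log-)hyperparameters of the learning rule.
mean = np.zeros(2)                 # log learning rate, log inverse temperature
std = np.ones(2)
population, elite_frac = 32, 0.25

for generation in range(15):
    samples = rng.normal(mean, std, size=(population, 2))
    scores = np.array([fitness(s) for s in samples])
    elite = samples[np.argsort(scores)[-int(population * elite_frac):]]
    mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-3
    print(f"gen {generation:2d}  mean reward {scores.mean():.3f}")

print("optimized hyperparameters (learning rate, inv. temperature):", np.exp(mean))
```

Each outer-loop generation evaluates the candidate learning rules on many freshly drawn tasks, which is exactly the kind of massively repeated network computation that accelerated neuromorphic hardware makes affordable.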

【 License 】

Unknown   
