Journal Article Details
Frontiers in Computational Neuroscience
Local minimization of prediction errors drives learning of invariant object representations in a generative network model of visual perception
Neuroscience
Matthias Brucklacher1  Sander M. Bohté2  Jorge F. Mejias1  Cyriel M. A. Pennartz1 
[1]Cognitive and Systems Neuroscience Group, Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam, Netherlands
[2]Machine Learning Group, Centrum Wiskunde & Informatica, Amsterdam, Netherlands
Keywords: self-supervised learning; predictive coding; generative model; vision; hierarchy; representation learning; Hebbian learning; video
DOI: 10.3389/fncom.2023.1207361
Received: 2023-04-17; Accepted: 2023-08-31; Published: 2023
Source: Frontiers
【 Abstract 】
The ventral visual processing hierarchy of the cortex needs to fulfill at least two key functions: perceived objects must be mapped to high-level representations invariantly of the precise viewing conditions, and a generative model must be learned that allows, for instance, filling in occluded information guided by visual experience. Here, we show how a multilayered predictive coding network can learn to recognize objects from the bottom up and to generate specific representations via a top-down pathway through a single learning rule: the local minimization of prediction errors. Trained on sequences of continuously transformed objects, neurons in the highest network area become tuned to object identity invariant of precise position, comparable to inferotemporal neurons in macaques. Drawing on this, the dynamic properties of invariant object representations reproduce experimentally observed hierarchies of timescales from low to high levels of the ventral processing stream. The predicted faster decorrelation of error-neuron activity compared to representation neurons is of relevance for the experimental search for neural correlates of prediction errors. Lastly, the generative capacity of the network is confirmed by reconstructing specific object images, robust to partial occlusion of the inputs. By learning invariance from temporal continuity within a generative model, the approach generalizes the predictive coding framework to dynamic inputs in a more biologically plausible way than self-supervised networks with non-local error-backpropagation. This was achieved simply by shifting the training paradigm to dynamic inputs, with little change in architecture and learning rule compared to static input-reconstructing Hebbian predictive coding networks.
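To make the learning rule summarized in the abstract concrete, the following is a minimal Python/NumPy sketch (not the authors' released code) of a two-area predictive coding network: representation neurons in the higher area settle so as to reduce the prediction error in the area below, and the generative weights are then updated Hebbianly from the product of local error and representation activity. The layer sizes, learning rates, number of inference steps, and the toy "translating object" sequence are illustrative assumptions, not values from the paper.

# Minimal sketch of predictive coding with purely local, Hebbian-style learning.
# All hyperparameters below are assumed for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Two-area hierarchy: area 0 receives the input frame, area 1 holds the
# higher-level representation. W generates top-down predictions of area 0.
sizes = [64, 16]                      # assumed layer sizes
W = rng.normal(scale=0.1, size=(sizes[0], sizes[1]))

def infer_and_learn(x, W, n_steps=50, lr_r=0.1, lr_w=0.01):
    """Settle representation neurons on one frame, then apply one local update."""
    r1 = np.zeros(sizes[1])           # representation neurons, higher area
    for _ in range(n_steps):
        pred = W @ r1                 # top-down prediction of the input area
        e0 = x - pred                 # error neurons: actual minus predicted
        # Representation neurons move to reduce the prediction error below them;
        # the update uses only locally available signals (W^T e0).
        r1 += lr_r * (W.T @ e0)
    # Hebbian weight update: product of presynaptic representation activity and
    # postsynaptic error activity, again a purely local quantity.
    W += lr_w * np.outer(e0, r1)
    return W, r1, e0

# Toy "video": the same pattern translated over time. Per the abstract, training
# on such temporally continuous sequences is what yields position-invariant
# high-level representations.
base = rng.normal(size=sizes[0])
for t in range(200):
    frame = np.roll(base, shift=t % 4)   # crude stand-in for a moving object
    W, r1, e0 = infer_and_learn(frame, W)

print("remaining prediction error:", np.linalg.norm(e0))

In this sketch the only teaching signal is the local prediction error itself; there is no backpropagated error across areas, which is the biological-plausibility point the abstract contrasts with self-supervised networks trained by non-local error-backpropagation.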
【 License 】

Unknown   
Copyright © 2023 Brucklacher, Bohté, Mejias and Pennartz.

【 Preview 】
Attachment list
Files                     Size     Format   View
RO202310121995048ZK.pdf   9471 KB  PDF      download
Article metrics
Downloads: 0    Views: 3