Journal Article Details
The Journal of Mathematical Neuroscience
Neurally plausible mechanisms for learning selective and invariant representations
Ankit Patel [1]  Fabio Anselmi [2]  Lorenzo Rosasco [3]
[1] Center for Neuroscience and Artificial Intelligence, Department of Neuroscience, Baylor College of Medicine, Baylor Plaza, 77030, Houston, USA; Department of Electrical & Computer Engineering, Rice University, 6100 Main St., 77005, Houston, USA
[2] Center for Neuroscience and Artificial Intelligence, Department of Neuroscience, Baylor College of Medicine, Baylor Plaza, 77030, Houston, USA; Laboratory for Computational and Statistical Learning (LCSL), Istituto Italiano di Tecnologia, Via Dodecaneso, Genova, Italy
[3] Center for Brains, Minds, and Machines (CBMM), Massachusetts Institute of Technology, 43 Vassar Street, Cambridge, USA; Laboratory for Computational and Statistical Learning (LCSL), Istituto Italiano di Tecnologia, Via Dodecaneso, Genova, Italy
Keywords: Invariance; Hebbian learning; Group theory
DOI: 10.1186/s13408-020-00088-7
Source: Springer
Abstract

Coding for visual stimuli in the ventral stream is known to be invariant to nuisance transformations that preserve object identity. Indeed, much recent theoretical and experimental work suggests that the main challenge for the visual cortex is to build up such nuisance-invariant representations. Recently, artificial convolutional networks have succeeded both in learning such invariant properties and, surprisingly, in predicting cortical responses in macaque and mouse visual cortex with unprecedented accuracy. However, some of the key ingredients that enable this success—supervised learning and the backpropagation algorithm—are neurally implausible. This makes it difficult to relate advances in understanding convolutional networks to the brain. In contrast, many of the existing neurally plausible theories of invariant representations in the brain involve unsupervised learning, and have been strongly tied to specific plasticity rules. To close this gap, we study an instantiation of a simple-complex cell model and show, for a broad class of unsupervised learning rules (including Hebbian learning), that we can learn object representations that are invariant to nuisance transformations belonging to a finite orthogonal group. These findings may have implications for developing neurally plausible theories and models of how the visual cortex or artificial neural networks build selectivity for discriminating objects and invariance to real-world nuisance transformations.

License

CC BY
