| NEUROCOMPUTING | Volume: 147 |
| Self-organizing maps with information theoretic learning | |
| Article | |
| Chalasani, Rakesh1  Principe, Jose C.1  | |
| [1] Univ Florida, Computat NeuroEngn Lab, Gainesville, FL 32608 USA | |
| Keywords: SOM; Kernel methods; Information theoretic learning; Magnification factor; | |
| DOI: 10.1016/j.neucom.2013.12.059 | |
| Source: Elsevier | |
【 Abstract 】
The self-organizing map (SOM) is one of the most popular clustering and data visualization algorithms and has evolved into a useful tool in pattern recognition and data mining since it was first introduced by Kohonen. However, it is observed that the magnification factor for such mappings deviates from the information-theoretically optimal value of 1 (for the SOM it is 2/3). This can be attributed to the use of the mean square error to adapt the system, which distorts the mapping by oversampling the low-probability regions. In this work, we first discuss the kernel SOM in terms of a similarity measure called the correntropy induced metric (CIM) and empirically show that this can enhance the magnification of the mapping without much increase in the computational complexity of the algorithm. We also show that adapting the SOM in the CIM sense is equivalent to reducing the localized cross information potential, an information-theoretic function that quantifies the similarity between two probability distributions. Using this property we propose a kernel bandwidth adaptation algorithm for Gaussian kernels, with both homoscedastic and heteroscedastic components. We show that the proposed model can achieve a mapping with optimal magnification and can automatically adapt the parameters of the kernel function. (C) 2014 Published by Elsevier B.V.
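The correntropy induced metric mentioned in the abstract can be sketched as follows; this is a minimal illustration, not the paper's implementation. It assumes the common definition CIM(x, y) = sqrt(κ(0) − V(x, y)), where V is the sample correntropy under a Gaussian kernel of bandwidth `sigma` (the function and parameter names here are illustrative):

```python
import numpy as np

def cim(x, y, sigma=1.0):
    """Correntropy induced metric between two sample vectors,
    using a Gaussian kernel of bandwidth sigma (kappa(0) = 1)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # Gaussian kernel evaluated on the per-component errors
    k = np.exp(-((x - y) ** 2) / (2.0 * sigma ** 2))
    # CIM^2 = kappa(0) - mean correntropy; kappa(0) = 1 for this kernel
    return float(np.sqrt(1.0 - k.mean()))
```

Unlike the mean square error, the CIM saturates for large errors (the Gaussian kernel of a large error is near zero, so the metric approaches 1 regardless of how large the error is), which is the property the abstract credits for reducing the oversampling of low-probability regions.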
【 License 】
Free
【 Preview 】
| Files | Size | Format | View |
|---|---|---|---|
| 10_1016_j_neucom_2013_12_059.pdf | 3127KB | PDF | |