| NEUROCOMPUTING | Volume: 214 |
| Sparse Multi-Modal Topical Coding for Image Annotation | |
| Article | |
| Song, Lingyun1  Luo, Minnan1  Liu, Jun1  Zhang, Lingling1  Qian, Buyue1  Li, Max Haifei2  Zheng, Qinghua1  | |
| [1] Xi An Jiao Tong Univ, Dept Comp Sci & Technol, SPKLSTN Lab, Xian 710049, Peoples R China | |
| [2] Union Univ, Dept Comp Sci, Jackson, TN 38305 USA | |
| Keywords: Topic models; Sparse latent representation; Image annotation; Image retrieval; | |
| DOI : 10.1016/j.neucom.2016.06.005 | |
| Source: Elsevier | |
【 Abstract 】
Image annotation plays a significant role in large-scale image understanding, indexing and retrieval. Probabilistic Topic Models (PTMs) attempt to address this task by learning latent representations of input samples, and existing studies have shown them to be effective. Though useful, PTMs have limitations in interpreting the latent representations of images and texts, which, if addressed, would broaden their applicability. In this paper, we introduce sparsity into PTMs to improve the interpretability of the inferred latent representations. Extending Sparse Topical Coding, which was originally designed for learning from unimodal documents, we propose a non-probabilistic formulation of PTMs for automatic image annotation, namely Sparse Multi-Modal Topical Coding. Beyond controlling sparsity, our model captures more compact correlations between words and image regions. Empirical results on benchmark datasets show that our model outperforms the baseline models on automatic image annotation and text-based image retrieval. (C) 2016 Elsevier B.V. All rights reserved.
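The abstract describes inferring sparse, non-probabilistic topic codes for documents rather than the dense topic distributions of standard PTMs. The sketch below is only an illustration of that general idea, not the paper's model: it codes one document's word-count vector against a fixed toy topic dictionary with a non-negative, L1-regularized least-squares objective solved by a simple ISTA-style loop. All names, the squared loss, and the solver are assumptions for illustration; the paper defines its own multi-modal objective and optimization.

```python
# Illustrative sketch of a sparse topical coding step (assumed formulation, not
# the paper's exact model): infer a non-negative sparse code s minimizing
#   0.5 * ||counts - topics @ s||^2 + lam * ||s||_1
import numpy as np


def sparse_code(counts, topics, lam=0.5, n_iters=200):
    """Projected ISTA for L1-regularized, non-negative coding (illustrative)."""
    V, K = topics.shape
    # Step size = 1 / Lipschitz constant of the smooth part's gradient.
    step = 1.0 / (np.linalg.norm(topics, 2) ** 2 + 1e-12)
    s = np.zeros(K)
    for _ in range(n_iters):
        grad = topics.T @ (topics @ s - counts)
        s = s - step * grad
        # Soft-thresholding plus non-negativity projection -> sparse code.
        s = np.maximum(s - step * lam, 0.0)
    return s


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    V, K = 50, 8                                  # vocabulary size, topics
    topics = np.abs(rng.normal(size=(V, K)))      # toy topic dictionary
    true_code = np.zeros(K)
    true_code[[1, 5]] = [3.0, 1.5]                # document uses 2 topics
    counts = topics @ true_code + 0.01 * rng.random(V)
    code = sparse_code(counts, topics, lam=0.5)
    print("nonzero topics:", np.nonzero(code > 1e-3)[0])
```

The L1 penalty is what yields codes with few active topics, which is the interpretability property the abstract attributes to sparse topical coding; a multi-modal extension would couple such codes across the word and image-region views.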
【 License 】
Free
【 Preview 】
| Files | Size | Format | View |
|---|---|---|---|
| 10_1016_j_neucom_2016_06_005.pdf | 6900KB | PDF | |