Conference Paper Details
5th Asian Conference on Machine Learning
Co-Training with Insufficient Views
Mathematical Sciences; Computer Science
Wei Wang (wangw@lamda.nju.edu.cn), National Key Laboratory for Novel Software Technology
PID: 123091
Source: CEUR
【 Abstract 】

Co-training is a well-known semi-supervised learning paradigm that exploits unlabeled data with two views. Most previous theoretical analyses of co-training rest on the assumption that each view is sufficient to correctly predict the label. However, this assumption can hardly be met in real applications due to feature corruption or various kinds of feature noise. In this paper, we present a theoretical analysis of co-training when neither view is sufficient. We define the diversity between the two views with respect to the confidence of prediction and prove that, if the two views have large diversity, co-training can improve learning performance by exploiting unlabeled data even with insufficient views. We also discuss the relationship between view insufficiency and view diversity, and give some implications for understanding the difference between co-training and co-regularization.
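
For readers unfamiliar with the paradigm, below is a minimal sketch of the classical co-training loop (in the style of Blum and Mitchell), in which each view's classifier pseudo-labels its most confident unlabeled examples for the other. It only illustrates the general paradigm the abstract refers to, not the analysis in this paper; the function name co_train, the choice of logistic regression, and all parameters are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression


def co_train(X1, X2, y, labeled_idx, unlabeled_idx, rounds=10, per_round=5):
    """Classical co-training sketch: one classifier per view, each
    pseudo-labeling its most confident unlabeled examples for the other.
    y must be a full-length label array; entries at unlabeled_idx are
    placeholders that get overwritten with pseudo-labels."""
    labeled = list(labeled_idx)          # should initially cover all classes
    unlabeled = list(unlabeled_idx)
    y = np.asarray(y, dtype=float).copy()

    h1 = LogisticRegression(max_iter=1000)
    h2 = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        if not unlabeled:
            break
        # train each view's classifier on the current labeled pool
        h1.fit(X1[labeled], y[labeled])
        h2.fit(X2[labeled], y[labeled])

        newly_labeled = set()
        for h, X in ((h1, X1), (h2, X2)):
            proba = h.predict_proba(X[unlabeled])
            conf = proba.max(axis=1)
            # pick the examples this view is most confident about
            for t in np.argsort(-conf)[:per_round]:
                idx = unlabeled[t]
                y[idx] = h.classes_[proba[t].argmax()]  # pseudo-label
                newly_labeled.add(idx)

        labeled.extend(newly_labeled)
        unlabeled = [i for i in unlabeled if i not in newly_labeled]
    return h1, h2

The paper's point is that this style of procedure can still help when neither X1 nor X2 alone suffices to determine the label, provided the two views disagree enough (have large diversity) in their prediction confidence.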

【 Preview 】
Attachments
Files                                  Size    Format   View
Co-Training with Insufficient Views    306KB   PDF      download