Conference Paper Details
Quality issues, measures of interestingness and evaluation of data mining models Workshop
A framework for monitoring classifiers performance: when and why failure occurs
Library, Information and Archival Science; Computer Science
Nitesh V. Chawla
PID: 84237
Source: CEUR
【 Abstract 】

Classifier error is the product of model bias and data variance. While it is important to understand the bias introduced by selecting a given learning algorithm, it is equally important to understand the variability in data over time, since even the One True Model might perform poorly when training and evaluation samples diverge. Thus, the ability to identify distributional divergence is critical to pinpointing when fracture points in classifier performance will occur. Contemporary evaluation methods do not take into account the impact of distribution shifts on the quality of classifier predictions. In this talk, I present a comprehensive framework to proactively detect breakpoints in classifier predictions and shifts in data distributions through a series of statistical tests. I outline and examine three scenarios under which data changes: sample selection bias, covariate shift, and shifting class priors.
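
The abstract does not name the specific statistical tests used in the framework. As an illustration only, the sketch below applies a per-feature two-sample Kolmogorov-Smirnov test, a common choice for flagging covariate shift between training and evaluation samples; the function name detect_covariate_shift, the significance level alpha, and the simulated data are hypothetical and not taken from the talk.

import numpy as np
from scipy.stats import ks_2samp

def detect_covariate_shift(train_X, eval_X, alpha=0.05):
    """Flag features whose marginal distribution differs between
    the training and evaluation samples (two-sample KS test per feature)."""
    shifted = []
    for j in range(train_X.shape[1]):
        stat, p_value = ks_2samp(train_X[:, j], eval_X[:, j])
        if p_value < alpha:
            shifted.append((j, stat, p_value))
    return shifted

# Hypothetical usage: the second feature drifts in the evaluation sample.
rng = np.random.default_rng(0)
train_X = rng.normal(0.0, 1.0, size=(1000, 3))
eval_X = rng.normal(0.0, 1.0, size=(1000, 3))
eval_X[:, 1] += 0.5  # simulated covariate shift

for j, stat, p in detect_covariate_shift(train_X, eval_X):
    print(f"feature {j}: KS statistic={stat:.3f}, p={p:.4g} -> shift detected")

In practice such marginal tests would be one component of a monitoring pipeline; shifts in class priors or sample selection bias, the other two scenarios named in the abstract, call for tests on the label distribution and on the joint selection mechanism rather than on individual feature marginals.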

【 Preview 】
Attachment List
File: A framework for monitoring classifiers performance: when and why failure occurs (77KB, PDF)
Document Metrics
Downloads: 2  Views: 42