Technical Report Details
Apples-to-Apples in Cross-Validation Studies: Pitfalls in Classifier Performance Measurement
Forman, George ; Scholz, Martin
HP Development Company
Keywords: AUC; F-measure; machine learning; ten-fold cross-validation; classification performance measurement; high class imbalance; class skew; experiment protocol
Report ID: HPL-2009-359
Subject: Computer Science (General)
United States | English
Source: HP Labs
【 Abstract 】

Cross-validation is a mainstay for measuring performance and progress in machine learning. There are subtle differences in how exactly to compute accuracy, F-measure, and Area Under the ROC Curve (AUC) in cross-validation studies. However, these details are not discussed in the literature, and incompatible methods are used by various papers and software packages, leading to inconsistency across the research literature. Anomalies in the performance calculations for particular folds and situations go undiscovered when they are buried in results aggregated over many folds and datasets, without anyone ever inspecting the intermediate performance measurements. This research note clarifies and illustrates the differences, and it provides guidance on how best to measure classification performance under cross-validation. In particular, there are several divergent methods in use for computing F-measure, which is often recommended as a performance measure under class imbalance, e.g., for text classification domains and in one-vs.-all reductions of datasets having many classes. We show by experiment that all but one of these computation methods lead to biased measurements, especially under high class imbalance. This paper is of particular interest to those designing machine learning software libraries and to researchers focused on high class imbalance.
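To illustrate the kind of divergence the abstract refers to, here is a minimal sketch contrasting two commonly encountered ways of aggregating F-measure over cross-validation folds: averaging per-fold F-measures versus pooling the confusion counts over all folds and computing F-measure once. The fold counts and the names `f_avg` and `f_pooled` are illustrative assumptions, not taken from the report itself; which variant the report ultimately recommends is stated in the paper, not here.

```python
# Sketch: two incompatible ways to aggregate F-measure across CV folds.
# Fold counts and variable names are illustrative, not from the report.

def f_measure(tp, fp, fn):
    """F1 = 2*TP / (2*TP + FP + FN); defined as 0 for the degenerate
    case of no true positives, false positives, or false negatives."""
    denom = 2 * tp + fp + fn
    return 0.0 if denom == 0 else 2 * tp / denom

# Hypothetical per-fold confusion counts (tp, fp, fn) for one classifier.
# Note the middle fold, where no positives are predicted or recovered.
folds = [(8, 2, 1), (0, 0, 3), (5, 1, 2)]

# Method A: compute F per fold, then average the fold scores.
f_avg = sum(f_measure(*f) for f in folds) / len(folds)

# Method B: pool the counts over all folds, then compute F once.
tp = sum(f[0] for f in folds)
fp = sum(f[1] for f in folds)
fn = sum(f[2] for f in folds)
f_pooled = f_measure(tp, fp, fn)

print(f"averaged: {f_avg:.4f}  pooled: {f_pooled:.4f}")
```

With these toy counts the two methods disagree noticeably (roughly 0.54 versus 0.74), driven by the fold with no predicted positives, which contributes a zero to the average but little mass to the pooled counts. This is exactly the situation, exacerbated under high class imbalance, in which the choice of aggregation method materially changes the reported score.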
