Journal article details
BMC Research Notes
Estimating misclassification error: a closer look at cross-validation based methods
Songthip Ounpraseuth1  Shelly Y Lensing1  Horace J Spencer1  Ralph L Kodell1
[1] Department of Biostatistics, University of Arkansas for Medical Sciences, 4301 W. Markham St. Slot 781, Little Rock, AR, 72205, USA
Keywords: Mean Squared Error; Classification Error Estimation; Bootstrap Cross-validation; Cross-validation
DOI: 10.1186/1756-0500-5-656
Received: 2012-07-25; Accepted: 2012-11-20; Published: 2012
【 Abstract 】

Background

To estimate a classifier’s error in predicting future observations, bootstrap methods have been proposed as reduced-variation alternatives to traditional cross-validation (CV) methods based on sampling without replacement. Monte Carlo (MC) simulation studies aimed at estimating the true misclassification error conditional on the training set are commonly used to compare CV methods. We conducted an MC simulation study to compare a new method of bootstrap CV (BCV) to k-fold CV for estimating classification error.
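
To make the two estimators concrete, the following Python sketch contrasts k-fold CV with a bootstrap-then-cross-validate (BCV-style) estimate. It is an illustration only, not the authors' code: the nearest-centroid classifier, the scikit-learn dependency, and the function names are assumptions introduced for the example, and the BCV variant shown (k-fold CV repeated within bootstrap resamples) is one plausible reading of bootstrap cross-validation that may differ in detail from the method evaluated in the study.

```python
# Illustrative sketch (not the authors' code): k-fold CV vs. a BCV-style
# estimate of misclassification error.  Classifier choice and function
# names are assumptions made for this example.
import numpy as np
from sklearn.neighbors import NearestCentroid  # simple stand-in classifier

def kfold_cv_error(X, y, k=10, rng=None):
    """Standard k-fold CV: split the data into k disjoint folds (sampling
    without replacement) and average the held-out error over folds."""
    rng = np.random.default_rng(rng)
    idx = rng.permutation(len(y))
    errors = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        clf = NearestCentroid().fit(X[train], y[train])
        errors.append(np.mean(clf.predict(X[fold]) != y[fold]))
    return float(np.mean(errors))

def bootstrap_cv_error(X, y, B=50, k=10, rng=None):
    """BCV-style estimate: draw B bootstrap resamples of the training set
    (with replacement) and average the k-fold CV error computed within each
    resample.  Because replicates of the same observation can land in both
    the analysis and assessment parts of a fold, this estimate tends to be
    optimistic -- the negative bias discussed in this abstract."""
    rng = np.random.default_rng(rng)
    n = len(y)
    errors = []
    for _ in range(B):
        boot = rng.integers(0, n, size=n)   # indices sampled with replacement
        errors.append(kfold_cv_error(X[boot], y[boot], k=k, rng=rng))
    return float(np.mean(errors))
```

With a balanced two-class training set, `kfold_cv_error(X, y)` and `bootstrap_cv_error(X, y)` can then each be compared against a large-test-set approximation of the true conditional error.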

Findings

For the low-dimensional conditions simulated, the modest positive bias of k-fold CV contrasted sharply with the substantial negative bias of the new BCV method. This behavior was corroborated using a real-world dataset of prognostic gene-expression profiles in breast cancer patients. Our simulation results reveal extreme variance and bias characteristics that can arise from a flaw in the design of CV exercises aimed at estimating the true conditional error of a classifier, and that appear not to have been fully appreciated in previous studies. Although CV is a sound practice for estimating a classifier’s generalization error, using CV to estimate the fixed misclassification error of a trained classifier conditional on its training set is problematic. While MC simulation of this estimation exercise can correctly represent the average bias of the error estimate, it will overstate the between-run variance of that bias.
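
The distinction drawn here between average bias and between-run variance can be illustrated with a small Monte Carlo sketch: each run draws a fresh training set, approximates that trained classifier's conditional error on a large independent test set, and records how far the CV estimate deviates from it. The two-class Gaussian data model, sample sizes, and nearest-centroid classifier below are illustrative assumptions and do not reproduce the simulation conditions of the study.

```python
# Illustrative Monte Carlo sketch (assumed data model and classifier, not the
# study's design): compare a 10-fold CV estimate to the true conditional error
# approximated on a large independent test set, run by run.
import numpy as np
from sklearn.neighbors import NearestCentroid
from sklearn.model_selection import cross_val_score

def simulate_data(n, p=5, delta=1.0, rng=None):
    """Two equal-sized Gaussian classes whose means differ by delta in each coordinate."""
    rng = np.random.default_rng(rng)
    half = n // 2
    X = np.vstack([rng.normal(0.0, 1.0, size=(half, p)),
                   rng.normal(delta, 1.0, size=(n - half, p))])
    y = np.r_[np.zeros(half, dtype=int), np.ones(n - half, dtype=int)]
    return X, y

def run_mc(n_runs=200, n_train=40, n_test=5000, seed=0):
    rng = np.random.default_rng(seed)
    biases = []
    for _ in range(n_runs):
        X_tr, y_tr = simulate_data(n_train, rng=rng)
        X_te, y_te = simulate_data(n_test, rng=rng)
        # "True" error of the trained classifier, conditional on this training set.
        clf = NearestCentroid().fit(X_tr, y_tr)
        true_err = np.mean(clf.predict(X_te) != y_te)
        # CV estimate computed from the training set alone (error = 1 - accuracy).
        cv_err = 1.0 - cross_val_score(NearestCentroid(), X_tr, y_tr, cv=10).mean()
        biases.append(cv_err - true_err)
    biases = np.asarray(biases)
    return biases.mean(), biases.std(ddof=1)

if __name__ == "__main__":
    avg_bias, sd_bias = run_mc()
    print(f"average bias of 10-fold CV: {avg_bias:+.3f}; between-run SD of bias: {sd_bias:.3f}")
```

Averaging `cv_err - true_err` over runs gives the average bias of the estimator, while its run-to-run spread is the between-run variability that, as noted above, an exercise of this kind tends to overstate.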

Conclusions

We recommend k-fold CV over the new BCV method for estimating a classifier’s generalization error. The extreme negative bias of BCV is too high a price to pay for its reduced variance.

【 License 】

   
2012 Ounpraseuth et al.; licensee BioMed Central Ltd.

【 Preview 】
Attachments
Files Size Format View
20150416024144184.pdf 572KB PDF download
Figure 5. 43KB Image download
Figure 4. 43KB Image download
Figure 3. 42KB Image download
Figure 2. 39KB Image download
Figure 1. 45KB Image download
【 Figures 】

Figures 1–5 (image files listed in the attachments above).
