Journal article details
Diagnostic and Prognostic Research
Adaptive sample size determination for the development of clinical prediction models
Ewout W. Steyerberg, Evangelia Christodoulou, Ben Van Calster, Michael Edlinger, Dirk Timmerman, Maarten van Smeden, Maria Wanitschek
Affiliations: Department of Biomedical Data Sciences, Leiden University Medical Center, Leiden, Netherlands; Department of Development & Regeneration, KU Leuven, Leuven, Belgium; EPI-centre, KU Leuven, Leuven, Belgium; Department of Medical Statistics, Informatics, and Health Economics, Medical University Innsbruck, Innsbruck, Austria; Department of Obstetrics and Gynecology, University Hospitals Leuven, Leuven, Belgium; Julius Center for Health Sciences and Primary Care, University Medical Center Utrecht, Utrecht, Netherlands; University Clinic of Internal Medicine III - Cardiology and Angiology, Tirol Kliniken, Innsbruck, Austria
Keywords: Adaptive design; Clinical prediction models; Events per variable; Model development; Model validation; Sample size
DOI: 10.1186/s41512-021-00096-5
Source: Springer
Abstract

Background: We suggest an adaptive sample size calculation method for developing clinical prediction models, in which model performance is monitored sequentially as new data come in.

Methods: We illustrate the approach using data for the diagnosis of ovarian cancer (n = 5914, 33% event fraction) and obstructive coronary artery disease (CAD; n = 4888, 44% event fraction). We used logistic regression to develop a prediction model consisting only of a priori selected predictors, assuming linear relations for continuous predictors. We mimicked prospective patient recruitment by developing the model on 100 randomly selected patients and used bootstrapping to internally validate the model. We then sequentially added 50 random new patients until we reached a sample size of 3000, re-estimating model performance at each step. We examined the sample size required to satisfy the following stopping rule: a calibration slope ≥ 0.9 and optimism in the c-statistic (or AUC) ≤ 0.02 at two consecutive sample sizes. This procedure was repeated 500 times. We also investigated the impact of alternative modeling strategies: modeling nonlinear relations for continuous predictors and correcting the model estimates for bias (Firth's correction).

Results: Better discrimination was achieved in the ovarian cancer data (c-statistic 0.9 with 7 predictors) than in the CAD data (c-statistic 0.7 with 11 predictors). Adequate calibration and limited optimism in discrimination were achieved after a median of 450 patients (interquartile range 450–500) for the ovarian cancer data (22 events per parameter (EPP), 20–24) and 850 patients (750–900) for the CAD data (33 EPP, 30–35). A stricter criterion, requiring AUC optimism ≤ 0.01, was met with a median of 500 (23 EPP) and 1500 (59 EPP) patients, respectively. These sample sizes were much higher than the well-known rule of thumb of 10 EPP, and slightly higher than those from a recently published fixed sample size calculation method by Riley et al. Higher sample sizes were required when nonlinear relationships were modeled, and lower sample sizes when Firth's correction was used.

Conclusions: Adaptive sample size determination can be a useful supplement to fixed a priori sample size calculations, because it allows the sample size to be tailored to the specific prediction modeling context in a dynamic fashion.
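To make the stopping rule concrete, below is a minimal Python sketch (not the authors' code) of the adaptive procedure described in the abstract: fit an unpenalized logistic regression on a growing sample, estimate AUC optimism and the calibration slope by bootstrapping, and stop once the calibration slope is ≥ 0.9 and AUC optimism is ≤ 0.02 at two consecutive sample sizes. Function names such as `bootstrap_validation` and `adaptive_sample_size` are illustrative, and scikit-learn ≥ 1.2 is assumed (for `penalty=None`).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)


def calibration_slope(y, linear_predictor):
    """Slope from refitting a logistic model of y on a given linear predictor."""
    lp = np.asarray(linear_predictor).reshape(-1, 1)
    recal = LogisticRegression(penalty=None, solver="lbfgs", max_iter=1000).fit(lp, y)
    return recal.coef_[0, 0]


def bootstrap_validation(X, y, n_boot=200):
    """Bootstrap internal validation of an unpenalized logistic model:
    returns the mean optimism in the c-statistic (AUC) and the mean
    calibration slope of bootstrap models applied to the original data."""
    optimism_auc, slopes = [], []
    n = len(y)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # sample with replacement
        Xb, yb = X[idx], y[idx]
        if yb.min() == yb.max():             # degenerate resample, skip
            continue
        mb = LogisticRegression(penalty=None, solver="lbfgs", max_iter=1000).fit(Xb, yb)
        auc_boot = roc_auc_score(yb, mb.decision_function(Xb))   # apparent AUC in bootstrap sample
        auc_orig = roc_auc_score(y, mb.decision_function(X))     # AUC of bootstrap model on original data
        optimism_auc.append(auc_boot - auc_orig)
        slopes.append(calibration_slope(y, mb.decision_function(X)))
    return np.mean(optimism_auc), np.mean(slopes)


def adaptive_sample_size(X, y, start=100, step=50, max_n=3000,
                         slope_min=0.9, optimism_max=0.02):
    """Mimic prospective recruitment: grow the sample in steps (assuming rows
    are already in random recruitment order) and stop once the calibration
    slope and AUC optimism criteria hold at two consecutive sample sizes."""
    consecutive = 0
    for n in range(start, max_n + 1, step):
        opt_auc, slope = bootstrap_validation(X[:n], y[:n])
        ok = slope >= slope_min and opt_auc <= optimism_max
        consecutive = consecutive + 1 if ok else 0
        if consecutive == 2:
            return n
    return None  # stopping rule not satisfied within max_n


# Example usage (hypothetical data): X is an (n, p) array of a priori
# selected predictors, y a binary outcome vector.
# n_required = adaptive_sample_size(X, y)
```

This sketch covers a single run of the procedure with linear predictor effects only; the paper repeats the procedure 500 times per dataset and also examines nonlinear terms and Firth's correction, which are not shown here.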

License

CC BY   

Preview
Attachment list
File Size Format
RO202107029794444ZK.pdf 1643 KB PDF