Journal Article Details
BMC Medical Informatics and Decision Making
An empirical analysis of dealing with patients who are lost to follow-up when developing prognostic models using a cohort design
Peter Rijnbeek1  Jenna M. Reps2  Patrick B. Ryan2  Martijn Schuemie2  Nicole Pratt3  Alana Cuthbert4 
[1] Department of Medical Informatics, Erasmus University Medical Center, Rotterdam, The Netherlands;
[2] Janssen Research and Development, Titusville, NJ, USA;
[3] Quality Use of Medicines and Pharmacy Research Centre, Sansom Institute, School of Pharmacy and Medical Sciences, University of South Australia, Adelaide, SA, Australia;
[4] South Australian Health and Medical Research Institute (SAHMRI), Adelaide, SA, Australia;
Keywords: Prognostic model; Loss to follow-up; Censoring; PatientLevelPrediction; Best practices; Model development
DOI: 10.1186/s12911-021-01408-x
Source: Springer
【 Abstract 】

Background: Researchers developing prediction models face numerous design choices that may impact model performance. One key decision is how to handle patients who are lost to follow-up. In this paper we perform a large-scale empirical evaluation investigating the impact of this decision. In addition, we aim to provide guidelines on how to deal with loss to follow-up.

Methods: We generate a partially synthetic dataset with complete follow-up and simulate loss to follow-up based either on random selection or on selection based on comorbidity. In addition to our synthetic data study, we investigate 21 real-world prediction problems. We compare four simple strategies for developing models when using a cohort design that encounters loss to follow-up. Three strategies employ a binary classifier with data that (1) include all patients (including those lost to follow-up), (2) exclude all patients lost to follow-up, or (3) exclude only patients lost to follow-up who do not have the outcome before being lost to follow-up. The fourth strategy uses a survival model with data that include all patients. We empirically evaluate discrimination and calibration performance.

Results: The partially synthetic data study shows that excluding patients who are lost to follow-up can introduce bias when loss to follow-up is common and does not occur at random. However, when loss to follow-up was completely at random, the choice of how to address it had negligible impact on model discrimination performance. Our empirical real-world data results showed that the four design choices yielded comparable performance for a 1-year time-at-risk but exhibited differential bias for a 3-year time-at-risk. Removing patients who are lost to follow-up before experiencing the outcome while keeping patients who are lost to follow-up after the outcome can bias a model and should be avoided.

Conclusion: Based on this study we therefore recommend (1) developing models using data that include patients who are lost to follow-up, and (2) evaluating the discrimination and calibration of models twice: once on a test set including patients lost to follow-up and once on a test set excluding them.
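The Methods compare three binary-classifier inclusion strategies (plus a survival model) and the Conclusion recommends evaluating each model on test sets both including and excluding patients lost to follow-up. Below is a minimal, hypothetical Python sketch of that comparison on fully synthetic data. It is not the authors' code and does not use the OHDSI PatientLevelPrediction package; the column names, the simulated outcome and loss-to-follow-up mechanisms, and the logistic-regression learner are illustrative assumptions only.

```python
# Hypothetical sketch: three binary-classifier strategies for handling loss
# to follow-up, each evaluated on a test set including and excluding
# lost-to-follow-up patients. All names and models are assumptions.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 20_000
time_at_risk = 365  # 1-year time-at-risk, in days

# Synthetic cohort with complete follow-up: one comorbidity score drives
# both the outcome and (optionally) the chance of being lost to follow-up.
comorbidity = rng.normal(size=n)
outcome = (rng.random(n) < 1 / (1 + np.exp(-(comorbidity - 2)))).astype(int)
time_outcome = rng.integers(1, time_at_risk + 1, size=n)

# Loss to follow-up: completely at random vs. dependent on comorbidity.
p_lost_random = np.full(n, 0.3)
p_lost_informative = 1 / (1 + np.exp(-(comorbidity - 1)))
lost = rng.random(n) < p_lost_informative  # swap in p_lost_random to compare
time_lost = rng.integers(1, time_at_risk + 1, size=n)

df = pd.DataFrame({"comorbidity": comorbidity, "outcome": outcome,
                   "time_outcome": time_outcome, "lost": lost,
                   "time_lost": time_lost})
train, test = train_test_split(df, test_size=0.25, random_state=0)

# Strategy 1: keep every patient, including those lost to follow-up.
s1 = train
# Strategy 2: exclude every patient lost to follow-up.
s2 = train[~train["lost"]]
# Strategy 3: exclude only patients lost to follow-up *before* the outcome;
# patients lost after experiencing the outcome are kept.
outcome_before_loss = (train["outcome"] == 1) & (train["time_outcome"] <= train["time_lost"])
s3 = train[~train["lost"] | outcome_before_loss]
# Strategy 4 (a survival model fitted to all patients) is omitted for brevity.

def dual_auc(dev, test):
    """Fit a classifier on one development set and report discrimination
    (AUC) on a test set including and excluding lost-to-follow-up patients."""
    model = LogisticRegression().fit(dev[["comorbidity"]], dev["outcome"])
    score = lambda d: roc_auc_score(d["outcome"],
                                    model.predict_proba(d[["comorbidity"]])[:, 1])
    return score(test), score(test[~test["lost"]])

for name, dev in [("include all", s1), ("exclude all lost", s2),
                  ("exclude lost before outcome", s3)]:
    auc_incl, auc_excl = dual_auc(dev, test)
    print(f"{name}: AUC incl. lost = {auc_incl:.3f}, excl. lost = {auc_excl:.3f}")
```

Under these assumptions, switching between the random and comorbidity-dependent loss mechanisms (and between the 1-year and a longer time-at-risk) is what would expose the kind of differential bias the Results describe; calibration could be assessed analogously on the same two test sets.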

【 License 】

CC BY   

【 Preview 】
Attachments
File                       Size     Format
RO202106282379768ZK.pdf    2446 KB  PDF
Document metrics
Downloads: 3   Views: 1