BMC Medical Informatics and Decision Making
Predicting clinical outcomes among hospitalized COVID-19 patients using both local and published models
Kevin Chow1, Karl M. Kochendorfer2, Julian Theis3, Houshang Darabi3, Samuel Harford3, Maryam Pishgar3, Jorge Mario Rodríguez-Fernández4, John Zulueta5, William Galanter6
Affiliations: College of Medicine, UIC, Chicago, USA; Department of Family and Community Medicine, UIC, Chicago, USA; Department of Mechanical and Industrial Engineering, UIC, Chicago, USA; Department of Neurology, Clinical Informatics Fellowship, UIC, Chicago, USA; Department of Psychiatry, UIC, Chicago, USA; Departments of Medicine and Pharmacy Systems, Outcomes and Policy, University of Illinois at Chicago (UIC), Chicago, USA
Keywords: Mortality; Hospitalization; COVID-19; Statistical model; Prediction; Model generalizability
DOI: 10.1186/s12911-021-01576-w
Source: Springer
Abstract
Background: Many published models predict outcomes in hospitalized COVID-19 patients, but the generalizability of most is unknown. We evaluated the performance of selected models from the literature, and of our own models, in predicting outcomes for patients at our institution.

Methods: We searched the literature for models predicting outcomes in inpatients with COVID-19. We produced models of mortality and of criticality (mortality or ICU admission) in a development cohort. We then tested external models that provided sufficient information, along with our own models, on a test cohort of our most recent patients. Model performance was compared using the area under the receiver operating characteristic curve (AUC).

Results: Our literature review yielded 41 papers. Of those, 8 had sufficient documentation and concordance with features available in our cohort to be implemented on our test cohort. All were derived from Chinese patients; one model predicted criticality and seven predicted mortality. Tested against the test cohort, our internal models had an AUC of 0.84 (0.74–0.94) for mortality and 0.83 (0.76–0.90) for criticality. The best external model had an AUC of 0.89 (0.82–0.96) using three variables; another had an AUC of 0.84 (0.78–0.91) using ten variables. AUCs ranged from 0.68 to 0.89. On average, the tested models were unable to produce predictions in 27% of patients because of missing laboratory data.

Conclusion: Despite differences in pandemic timeline, race, and socio-cultural healthcare context, some models derived in China performed well. For healthcare organizations considering implementation of an external model, concordance between the features used in the model and the features available for their own patients may be important. Analysis of both local and external models can help decide which prediction method to use for clinical decision support for clinicians treating COVID-19 patients, as well as which laboratory tests to include in order sets.
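The abstract contains no code. As a rough sketch of the evaluation workflow it describes (applying a published model to a local test cohort, tracking patients who cannot be scored because of missing labs, and reporting an AUC with a bootstrap confidence interval), the Python below uses an invented three-variable logistic-style score. The column names, coefficients, and data are illustrative placeholders, not the actual external models or the study's cohort.

```python
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

# Hypothetical local test cohort; columns and values are illustrative only.
cohort = pd.DataFrame({
    "age":        [62, 45, 78, 55, np.nan, 70],
    "crp":        [110.0, 12.0, 95.0, np.nan, 30.0, 150.0],   # C-reactive protein
    "lymphocyte": [0.6, 1.8, 0.5, 1.2, 1.5, np.nan],          # absolute count
    "died":       [1, 0, 1, 0, 0, 1],
})

def external_score(row):
    """Apply an illustrative three-variable logistic-style score.
    Coefficients are placeholders, NOT a published model's values.
    Returns NaN when a required lab is missing, mirroring the paper's
    observation that predictions could not be produced for some patients."""
    if row[["age", "crp", "lymphocyte"]].isna().any():
        return np.nan
    logit = -6.0 + 0.05 * row["age"] + 0.02 * row["crp"] - 1.0 * row["lymphocyte"]
    return 1.0 / (1.0 + np.exp(-logit))

cohort["pred"] = cohort.apply(external_score, axis=1)

# Fraction of patients for whom the model cannot produce a prediction.
missing_frac = cohort["pred"].isna().mean()

# AUC with a simple bootstrap 95% confidence interval on the scorable subset.
scorable = cohort.dropna(subset=["pred"])
rng = np.random.default_rng(0)
aucs = []
for _ in range(1000):
    sample = scorable.sample(frac=1.0, replace=True,
                             random_state=int(rng.integers(0, 2**31 - 1)))
    if sample["died"].nunique() < 2:
        continue  # a bootstrap sample needs both outcomes to compute an AUC
    aucs.append(roc_auc_score(sample["died"], sample["pred"]))

print(f"Unscorable fraction: {missing_frac:.0%}")
print(f"AUC {roc_auc_score(scorable['died'], scorable['pred']):.2f} "
      f"(95% CI {np.percentile(aucs, 2.5):.2f}-{np.percentile(aucs, 97.5):.2f})")
```

In practice, the scorable subset and the missing-data fraction would be computed separately for each external model, since each requires a different set of laboratory values.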
License: CC BY