BMC Medical Informatics and Decision Making
The advanced machine learner XGBoost did not reduce prehospital trauma mistriage compared with logistic regression: a simulation study
Johanna Berg, Martin Gerdin Wärnberg, Mikael Gellerfors, Anna Larsson
Affiliations: Department of Emergency Medicine, Skåne University Hospital Malmö, Inga Marie Nilssons gata 47, 21421, Malmö, Sweden; Department of Global Public Health, Karolinska Institutet, 171 77, Solna, Sweden; Function Perioperative Medicine and Intensive Care, Karolinska University Hospital, Solna, Stockholm, Sweden; Department of Physiology and Pharmacology, Karolinska Institutet, 171 77, Solna, Sweden; Swedish Air Ambulance (SLA), Mora, Sweden; Rapid Response Cars, Stockholm, Sweden; Emergency Department, Södersjukhuset, Sjukhusbacken 10, 11883, Stockholm, Sweden
Keywords: Trauma; Prehospital triage; Undertriage; Overtriage; Clinical prediction model; Machine learning
DOI: 10.1186/s12911-021-01558-y
Source: Springer
Abstract
Background: Accurate prehospital trauma triage is crucial for identifying critically injured patients and determining the level of care. In the prehospital setting, time and data are often scarce, limiting the complexity of triage models. The aim of this study was to assess whether, compared with logistic regression, the advanced machine learner XGBoost (eXtreme Gradient Boosting) is associated with reduced prehospital trauma mistriage.
Methods: We conducted a simulation study based on data from the US National Trauma Data Bank (NTDB) and the Swedish Trauma Registry (SweTrau). We used categorized systolic blood pressure, respiratory rate, Glasgow Coma Scale and age as our predictors. The outcome was the difference in under- and overtriage rates between the models for different training dataset sizes.
Results: We used data from 813,567 patients in the NTDB and 30,577 patients in SweTrau. In SweTrau, the smallest training set of 10 events per free parameter was sufficient for model development. XGBoost achieved undertriage rates in the range of 0.314–0.324 with corresponding overtriage rates of 0.319–0.322. Logistic regression achieved undertriage rates ranging from 0.312 to 0.321 with associated overtriage rates ranging from 0.321 to 0.323. In NTDB, XGBoost required the largest training set size of 1000 events per free parameter to achieve robust results, whereas logistic regression achieved stable performance from a training set size of 25 events per free parameter. For the training set size of 1000 events per free parameter, XGBoost obtained an undertriage rate of 0.406 with an overtriage of 0.463. For logistic regression, the corresponding undertriage was 0.395 with an overtriage of 0.468.
Conclusion: The under- and overtriage rates associated with the advanced machine learner XGBoost were similar to the rates associated with logistic regression regardless of sample size, but XGBoost required larger training sets to obtain robust results. We do not recommend using XGBoost over logistic regression in this context when predictors are few and categorical.
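To make the comparison concrete, the following is a minimal sketch (not the authors' code) of how the two model families could be fitted to categorized vital-sign predictors and evaluated on under- and overtriage rates. The synthetic data, column names, category cut-offs, and decision threshold are illustrative assumptions, not the study's actual registry data or settings.

```python
# Minimal sketch: compare XGBoost and logistic regression on categorized
# predictors and compute under-/overtriage at a fixed decision threshold.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n = 5000

# Synthetic stand-in for registry data: categorized predictors as in the
# study (systolic blood pressure, respiratory rate, GCS, age).
X = pd.DataFrame({
    "sbp_cat": rng.integers(0, 3, n),   # illustrative SBP categories
    "rr_cat": rng.integers(0, 3, n),    # illustrative respiratory rate categories
    "gcs_cat": rng.integers(0, 3, n),   # illustrative GCS categories
    "age_cat": rng.integers(0, 3, n),   # illustrative age bands
})
# Synthetic outcome: "major trauma" more likely with worse categories.
logit = -2.0 + 0.8 * (2 - X["gcs_cat"]) + 0.5 * (2 - X["sbp_cat"])
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "xgboost": XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss"),
}

threshold = 0.5  # illustrative cut-off for flagging major trauma
for name, model in models.items():
    model.fit(X, y)
    flagged = (model.predict_proba(X)[:, 1] >= threshold).astype(int)
    # Undertriage: major trauma patients not flagged (missed);
    # overtriage: non-major trauma patients flagged.
    undertriage = np.mean(flagged[y == 1] == 0)
    overtriage = np.mean(flagged[y == 0] == 1)
    print(f"{name}: undertriage={undertriage:.3f}, overtriage={overtriage:.3f}")
```

In the study itself, the equivalent comparison was repeated over training sets of varying size (expressed as events per free parameter) to assess how much data each learner needed before its under- and overtriage rates stabilized.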
License
CC BY