BMC Psychiatry
The reproducibility of psychiatric evaluations of work disability: two reliability and agreement studies
Jason W. Busse1, Wout E. L. de Boer2, Nicole Vogel2, Monica Bachmann2, Thomas Zumbrunn2, Regina Kunz2, David Y. von Allmen2, Etienne Colomb3, Katrin Fischer4, Heinz J. Schaad5, Joerg Jeger6, Martin Eichhorn7, Ulrike Hoffmann-Richter8, Renato Marelli9, Ralph Mager9, Oskar Bänziger1,10
Affiliations: Department of Anaesthesia, McMaster University; Department of Clinical Research, Evidence-based Insurance Medicine, University of Basel, University Hospital; French-Speaking Swiss Association of Practitioners in Medical Expertise (ARPEM); Institute Humans in Complex Systems, School of Applied Psychology, University of Applied Sciences Northwestern Switzerland; Institute for Medical Disability Evaluation Interlaken; Institute of Medical Disability Evaluations of Central Switzerland; Private Practice for Psychiatry; Swiss National Accident Insurance Funds; Swiss Society of Insurance Psychiatry, SGVP; Zuerich Office of the Swiss National Disability Insurance
Keywords: Disability evaluation; Work capacity evaluation; Return to work; Social security; Reproducibility of results; Observer variation
DOI: 10.1186/s12888-019-2171-y
Source: DOAJ
Abstract
Background: Expert psychiatrists conducting work disability evaluations often disagree on work capacity (WC) when assessing the same patient. More structured and standardised evaluations focusing on function could improve agreement. The RELY studies aimed to establish the inter-rater reproducibility (reliability and agreement) of 'functional evaluations' in patients with mental disorders applying for disability benefits, and to compare the effect of limited versus intensive expert training on reproducibility.

Methods: We performed two multi-centre reproducibility studies of standardised functional WC evaluation (RELY 1 and 2). Trained psychiatrists interviewed 30 and 40 patients, respectively, and determined WC using the Instrument for Functional Assessment in Psychiatry (IFAP). Three psychiatrists per patient estimated WC from videotaped evaluations. We analysed reliability (intraclass correlation coefficients [ICC]) and agreement ('standard error of measurement' [SEM] and proportions of comparisons within prespecified limits) between expert evaluations of WC. Our primary outcome was WC in alternative work (WCalternative.work; scale 0–100%). Secondary outcomes were WC in the last job (WClast.job; 0–100%), patients' perceived fairness of the evaluation (0–10, higher is better), and usefulness to psychiatrists.

Results: Inter-rater reliability for WCalternative.work was fair in RELY 1 (ICC 0.43; 95%CI 0.22–0.60) and RELY 2 (ICC 0.44; 0.25–0.59). Agreement was low in both studies: the SEM for WCalternative.work was 24.6 percentage points (20.9–28.4) and 19.4 (16.9–22.0), respectively. Using a 'maximum acceptable difference' of 25 percentage points WCalternative.work between two experts, 61.6% of comparisons in RELY 1 and 73.6% in RELY 2 fell within this limit. A post-hoc secondary analysis of RELY 2 versus RELY 1 showed a significant change in SEMalternative.work (−5.2 percentage points WCalternative.work [95%CI −9.7 to −0.6]) and in the proportion of between-expert differences ≤25 percentage points WCalternative.work (p = 0.008). Patients perceived the functional evaluation as fair (RELY 1: mean 8.0; RELY 2: 9.4), and psychiatrists found it useful.

Conclusions: Evidence from non-randomised studies suggests that intensive training in functional evaluation may increase agreement on WC between experts, although the gain fell short of stakeholders' expectations; reliability did not change. Isolated efforts to train psychiatrists may not suffice to reach the expected level of agreement. A societal discussion about achievable goals, and a readiness to consider procedural changes in WC evaluations, may deserve consideration.
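To make the reproducibility measures used in the abstract concrete, the sketch below shows, in Python, how an ICC, an SEM and the proportion of expert pairs within a 25-percentage-point 'maximum acceptable difference' can be computed from a patients-by-raters matrix of WC ratings. It is illustrative only: the ratings matrix is invented, and the choice of ICC(2,1), the overall-SD convention in SEM = SD·√(1 − ICC) and the ±25-point threshold are assumptions for this example, not the study's actual data or analysis code.

```python
import numpy as np

def icc2_1(y):
    """ICC(2,1): two-way random-effects, absolute-agreement, single-rater ICC
    (Shrout & Fleiss). y is an (n_targets, k_raters) matrix of ratings."""
    n, k = y.shape
    grand = y.mean()
    ss_rows = k * ((y.mean(axis=1) - grand) ** 2).sum()    # between patients
    ss_cols = n * ((y.mean(axis=0) - grand) ** 2).sum()    # between raters
    ss_err = ((y - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical ratings: 5 patients x 3 experts, WC in % (0-100)
ratings = np.array([
    [80, 60, 70],
    [50, 40, 30],
    [90, 100, 80],
    [20, 40, 30],
    [60, 70, 50],
], dtype=float)

icc = icc2_1(ratings)
# Classical agreement index: SEM = SD * sqrt(1 - ICC), here using the overall SD
sem = ratings.std(ddof=1) * np.sqrt(1 - icc)

# Share of expert-pair comparisons within the assumed 25-point limit
pairwise = np.abs(ratings[:, :, None] - ratings[:, None, :])
i, j = np.triu_indices(ratings.shape[1], k=1)
prop_within = (pairwise[:, i, j] <= 25).mean()

print(f"ICC(2,1) = {icc:.2f}, SEM = {sem:.1f} points, within ±25 points: {prop_within:.0%}")
```

Higher ICC values indicate that experts rank patients consistently, whereas SEM and the proportion within the limit capture how far individual WC estimates for the same patient drift apart, which is why the study reports both.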
License
Unknown