Journal Article Details
BMC Health Services Research
An instrument for quality assurance in work capacity evaluation: development, evaluation, and inter-rater reliability
Heiner Vogel [1], Christian Gerlich [1], André Strahl [1], Georg W. Alpers [2], Jörg Gehrke [3], Annette Müller-Garnn [3]
[1] Department of Medical Psychology, Medical Sociology, and Rehabilitation Sciences, University of Wuerzburg
[2] Department of Psychology, School of Social Sciences, University of Mannheim
[3] Department of Social Medicine, German Statutory Pension Insurance
Keywords: Work capacity evaluation; Insurance medicine; Quality assurance; Peer review; Reliability
DOI: 10.1186/s12913-019-4387-4
Source: DOAJ
【 Abstract 】

Background: Employees insured under statutory pension insurance who are incapable of working due to ill health are entitled to a disability pension. To assess whether an individual meets the medical requirements to be considered disabled, a work capacity evaluation is conducted. However, there are no official guidelines on how to perform external quality assurance for this evaluation process. Furthermore, the quality of medical reports in the field of insurance medicine can vary substantially, and systematic evaluations are scarce. Reliability studies using peer review have repeatedly shown an insufficient ability to distinguish between high, moderate, and low quality. Following recommendations from the literature, we developed an instrument to examine the quality of medical experts' reports.

Methods: The peer review manual we developed contains six quality domains (formal structure, clarity, transparency, completeness, medical-scientific principles, and efficiency) comprising 22 items. In addition, a superordinate criterion (survey confirmability) ranks the overall quality and usefulness of a report; this criterion evaluates problems of inner logic and reasoning. Development of the manual was assisted by experienced physicians in a pre-test. We examined the observable variance in peer judgements and reliability as the most important outcome criteria. To evaluate inter-rater reliability, 20 anonymized experts' reports detailing the work capacity evaluation were reviewed by 19 trained raters (peers). Percentage agreement and Kendall's W, a measure of concordance between two or more raters, were calculated. A total of 325 reviews were conducted.

Results: Agreement of peer judgements with respect to the superordinate criterion ranged from 29.2% to 87.5%. Kendall's W for the quality domain items varied greatly, ranging from 0.09 to 0.88. For the superordinate criterion, Kendall's W was 0.39, which indicates fair agreement. The percentage agreement results revealed systematic peer preferences for certain deficit scale categories.

Conclusion: The superordinate criterion was not sufficiently reliable, although its reliability was comparable to values reported in other reliability studies. This report aims to encourage further efforts to improve evaluation instruments. To reduce disagreement between peer judgements, we propose revising the peer review instrument and developing and implementing a standardized rater training to improve reliability.
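For readers who want to see how the two reported statistics are conventionally computed, the following is a minimal Python sketch of Kendall's W (the coefficient of concordance) and pairwise percentage agreement. It assumes untied ranks and uses hypothetical example data; it is not the study's actual analysis code.

```python
import numpy as np
from itertools import combinations

def kendalls_w(ranks):
    """Kendall's coefficient of concordance W for an (m raters x n objects)
    matrix of ranks. Assumes each rater assigns untied ranks 1..n.
    W = 12*S / (m^2 * (n^3 - n)), where S is the sum of squared deviations
    of the per-object rank sums from their mean.
    """
    ranks = np.asarray(ranks, dtype=float)
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)            # total rank per object
    s = np.sum((rank_sums - rank_sums.mean()) ** 2)
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

def percentage_agreement(judgements):
    """Share of rater pairs assigning the identical category to one report."""
    pairs = list(combinations(judgements, 2))
    return sum(a == b for a, b in pairs) / len(pairs)

# Hypothetical example: 3 raters rank 4 reports (1 = highest quality).
ranks = [[1, 2, 3, 4],
         [1, 3, 2, 4],
         [2, 1, 3, 4]]
print(f"Kendall's W = {kendalls_w(ranks):.2f}")   # ~0.78: strong concordance

# Hypothetical example: 3 raters place one report in a quality category.
print(f"Agreement = {percentage_agreement(['high', 'high', 'low']):.1%}")
```

W ranges from 0 (no concordance) to 1 (perfect concordance); values near 0.39, as reported for the superordinate criterion, are commonly read as fair agreement.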

【 License 】

Unknown   
