Journal Article Details
Journal of Responsible Technology
Explainability for experts: A design framework for making algorithms supporting expert decisions more explainable
Ewa Luger 1, Rhianne Jones 2, Auste Simkute 3, Michael Evans 3, Bronwyn Jones 3
[1] Corresponding author at: The University of Edinburgh, 10 Crichton Street, Edinburgh EH8 9AB, United Kingdom; BBC Research and Development, Salford, Greater Manchester, United Kingdom; University of Edinburgh, Edinburgh, United Kingdom
Keywords: Explainability; Decision support systems; Journalism; Human-in-the-loop; Expertise
DOI:
Source: DOAJ
【 Abstract 】

Algorithmic decision support systems are widely applied in domains ranging from healthcare to journalism. To ensure that these systems are fair and accountable, humans must retain meaningful agency and be able to understand and oversee algorithmic processes. Explainability is often seen as a promising mechanism for keeping the human in the loop; however, current approaches are ineffective and can introduce various biases. We argue that explainability should be tailored to support the naturalistic decision-making and sensemaking strategies employed by domain experts and novices. Drawing on a review of the cognitive psychology and human factors literature, we map potential decision-making strategies as a function of expertise, risk, and time dynamics, and propose the conceptual Expertise, Risk and Time Explainability framework, intended to serve as a set of explainability design guidelines. Finally, we present a worked example in journalism to illustrate the applicability of the framework in practice.

【 License 】

Unknown   
