Journal Article Details
Frontiers in Artificial Intelligence
Tuning Fairness by Balancing Target Labels
Novi Quadrianto [1], Thomas Kehrenberg [2], Zexun Chen [2]
[1] National Research University Higher School of Economics, Moscow, Russia; Predictive Analytics Lab (PAL), Informatics, University of Sussex, Brighton, United Kingdom
Keywords: algorithmic bias; fairness; machine learning; demographic parity; equality of opportunity
DOI: 10.3389/frai.2020.00033
Source: DOAJ
【Abstract】

The issue of fairness in machine learning models has recently attracted a lot of attention, as ensuring fairness is essential for maintaining public confidence in the deployment of machine learning systems. We focus on mitigating the harm incurred by a biased machine learning system that offers better outputs (e.g., loans, job interviews) for certain groups than for others. We show that bias in the output can naturally be controlled in probabilistic models by introducing a latent target output. This formulation has several advantages: first, it is a unified framework for several notions of group fairness, such as Demographic Parity and Equality of Opportunity; second, it is expressed as a marginalization instead of a constrained problem; and third, it allows the encoding of our knowledge of what unbiased outputs should be. Practically, the second advantage allows us to avoid unstable constrained optimization procedures and to reuse off-the-shelf toolboxes; the third translates to the ability to control the level of fairness by directly varying fairness target rates. In contrast, existing approaches rely on intermediate, arguably unintuitive, control parameters such as covariance thresholds.
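To make the marginalization idea concrete, the following is a minimal Python sketch, not the authors' released implementation: it assumes a latent "balanced" label y_bar, models the observed label by P(y=1 | x, s) = sum over y_bar of P(y=1 | y_bar, s) * P(y_bar | x), and fits the latent-label classifier with an off-the-shelf optimizer. The function name fit_balanced_logreg, the specific transition probabilities P(y | y_bar, s), and the toy data are illustrative assumptions, not details taken from the paper.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit  # logistic sigmoid

    def fit_balanced_logreg(X, y, s, transition):
        """Fit a logistic model for the latent balanced label y_bar.

        transition[s, y_bar] = assumed P(y = 1 | y_bar, s); these
        group-dependent probabilities play the role of the fairness
        target-rate knobs described in the abstract.
        """
        n, d = X.shape

        def neg_log_marginal_likelihood(w):
            q = expit(X @ w)  # q = P(y_bar = 1 | x)
            # Marginalize the latent label out:
            # P(y=1 | x, s) = P(y=1|y_bar=1, s) * q + P(y=1|y_bar=0, s) * (1 - q)
            p1 = transition[s, 1] * q + transition[s, 0] * (1.0 - q)
            p1 = np.clip(p1, 1e-9, 1.0 - 1e-9)
            return -np.sum(y * np.log(p1) + (1 - y) * np.log(1.0 - p1))

        res = minimize(neg_log_marginal_likelihood, x0=np.zeros(d), method="L-BFGS-B")
        return res.x

    # Toy usage with synthetic, biased data (purely illustrative).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    s = rng.integers(0, 2, size=500)  # binary protected attribute
    y = (X[:, 0] + 0.8 * s + rng.normal(size=500) > 0).astype(int)

    # Hand-chosen transition probabilities per group: the tuning knobs.
    transition = np.array([[0.10, 0.80],   # group s = 0
                           [0.30, 0.95]])  # group s = 1

    w = fit_balanced_logreg(X, y, s, transition)
    scores = expit(X @ w)  # P(y_bar = 1 | x): the debiased prediction

Because training only requires maximizing an ordinary (marginal) likelihood, no constrained optimization routine is needed, and changing the fairness level amounts to changing the transition table rather than tuning an indirect parameter such as a covariance threshold.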

【License】

Unknown   
