    Fan Zhuoya, Meng Xiaofeng. Algorithmic Fairness and Fairness Computing[J]. Journal of Computer Research and Development, 2023, 60(9): 2048-2066. DOI: 10.7544/issn1000-1239.202220625

    Algorithmic Fairness and Fairness Computing

      Abstract: The problem of algorithmic fairness has a long history, and it has continually taken new forms as society has changed. With the acceleration of digital transformation, the root causes of algorithmic unfairness have gradually shifted from social bias to data bias and model bias; at the same time, algorithmic exploitation has become more hidden and more far-reaching. Although many fields of social science have long studied fairness, most of that work remains at the level of qualitative description. As a problem at the intersection of computer science and social science, algorithmic fairness under digital transformation must not only inherit the basic theories of the social sciences but also provide the methods and capabilities of fairness computing. Starting from the definition of algorithmic fairness, we summarize existing fairness-computing methods along three dimensions: social bias, data bias, and model bias. Finally, we compare algorithmic fairness indicators and fairness methods experimentally and analyze the challenges facing fairness computing. Our experiments show a trade-off between the fairness and accuracy of the original models, but a consistent relationship between the fairness and accuracy of the fairness methods. Regarding fairness indicators, correlations between different indicators vary widely, which underscores the importance of using diverse fairness indicators. Regarding fairness methods, any single method has limited effect, which underscores the importance of exploring combinations of fairness methods.
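The abstract refers to comparing algorithmic fairness indicators on model predictions. As a minimal sketch of what such group-fairness indicators look like, the snippet below computes two indicators that are standard in the fairness literature, demographic parity difference and equal opportunity difference; these particular indicators and the toy data are illustrative assumptions, not necessarily the exact indicators or datasets used in the paper's experiments.

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Absolute difference in positive-prediction rates between the
    protected group (group == True) and the rest."""
    return abs(y_pred[group].mean() - y_pred[~group].mean())

def equal_opportunity_diff(y_true, y_pred, group):
    """Absolute difference in true-positive rates between the two groups."""
    tpr = lambda g: y_pred[g & (y_true == 1)].mean()
    return abs(tpr(group) - tpr(~group))

# Toy data: 8 individuals, first 4 belong to the protected group.
y_true = np.array([1, 1, 0, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 1, 1, 1, 0, 0])
group  = np.array([True, True, True, True, False, False, False, False])

dp = demographic_parity_diff(y_pred, group)          # |0.75 - 0.5| = 0.25
eo = equal_opportunity_diff(y_true, y_pred, group)   # |2/3 - 1.0| = 1/3
```

A perfectly group-fair classifier would drive both differences to zero; as the abstract notes, different indicators can disagree on the same model, which is why comparing several of them matters.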

       
