ISSN 1000-1239 CN 11-1777/TP

计算机研究与发展 (Journal of Computer Research and Development) ›› 2021, Vol. 58 ›› Issue (2): 264-280. doi: 10.7544/issn1000-1239.2021.20200758

Special Topic: Data Governance and Data Transparency (2021)

• Artificial Intelligence •


Fairness Research on Deep Learning

Chen Jinyin1,2, Chen Yipeng2, Chen Yiming2, Zheng Haibin2, Ji Shouling3, Shi Jie4, Cheng Yao4   

  1(Institute of Cyberspace Security, Zhejiang University of Technology, Hangzhou 310023); 2(College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023); 3(College of Computer Science and Technology, Zhejiang University, Hangzhou 310058); 4(Huawei International Pte Ltd, Singapore 138589) (chenjinyin@zjut.edu.cn)
  • Online: 2021-02-01
  • Supported by: 
    This work was supported by the National Natural Science Foundation of China (62072406), the Natural Science Foundation of Zhejiang Province (LY19F020025), and the Major Special Funding for “Science and Technology Innovation 2025” in Ningbo (2018B10063).


Abstract: Deep learning is an important field of machine learning research; it is widely used in industry for its powerful feature extraction capabilities and its strong performance in many applications. However, because of bias in training data labeling and model design, existing research shows that deep learning may aggravate human bias and discrimination in some applications, producing unfairness in the decision-making process and thereby potentially harming both individuals and society. To improve the reliability of deep learning applications and promote its development in the field of fairness, we review, based on existing research, the sources of bias in deep learning applications, debiasing methods for different types of bias, fairness metrics for evaluating the effect of debiasing, and currently popular debiasing platforms. Finally, we summarize the open issues in the fairness research field and future development trends.
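As a concrete illustration of the group-fairness metrics this kind of survey covers, the sketch below computes two widely used ones: statistical (demographic) parity difference and equal opportunity difference. This is a minimal, hypothetical example assuming binary predictions and a binary sensitive attribute; it is not code from the paper itself.

```python
# Two common group-fairness metrics, sketched for binary predictions and a
# binary sensitive attribute (group 0 vs. group 1). Illustrative only.

def statistical_parity_difference(y_pred, group):
    # P(Y_hat = 1 | group = 0) - P(Y_hat = 1 | group = 1):
    # difference in positive-prediction rates between the two groups.
    def rate(g):
        preds = [p for p, s in zip(y_pred, group) if s == g]
        return sum(preds) / len(preds)
    return rate(0) - rate(1)

def equal_opportunity_difference(y_true, y_pred, group):
    # Difference in true-positive rates between the two groups,
    # i.e., P(Y_hat = 1 | Y = 1, group = 0) - P(Y_hat = 1 | Y = 1, group = 1).
    def tpr(g):
        preds = [p for p, t, s in zip(y_pred, y_true, group) if s == g and t == 1]
        return sum(preds) / len(preds)
    return tpr(0) - tpr(1)

# Toy example: a predictor that is fair under both metrics (difference = 0).
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(statistical_parity_difference(y_pred, group))         # 0.0
print(equal_opportunity_difference(y_true, y_pred, group))  # 0.0
```

A value of 0 indicates parity between the groups; debiasing methods surveyed in such work aim to push these differences toward 0 without sacrificing too much accuracy.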

Key words: deep learning, algorithmic fairness, debiasing method, fairness metric, machine learning
