Fan Zhuoya, Meng Xiaofeng. Algorithmic Fairness and Fairness Computing[J]. Journal of Computer Research and Development, 2023, 60(9): 2048-2066. DOI: 10.7544/issn1000-1239.202220625

Algorithmic Fairness and Fairness Computing

Funds: This work was supported by the National Natural Science Foundation of China (91846204, 62172423).
More Information
  • Author Bio:

    Fan Zhuoya: born in 1999. Master. Her main research interests include data mining, algorithmic fairness, and data privacy.

    Meng Xiaofeng: born in 1964. PhD, professor, PhD supervisor. Fellow of CCF. His main research interests include cloud data management, Web data management, and privacy preservation.

  • Received Date: July 09, 2022
  • Revised Date: April 20, 2023
  • Available Online: June 26, 2023
  • The problem of algorithmic fairness has a long history and has been continually reshaped by social change. With the acceleration of digital transformation, the root cause of unfairness has gradually shifted from social bias to data bias and model bias, while algorithmic exploitation has become more hidden and far-reaching. Although many fields of social science have long studied fairness, most of that work remains at the level of qualitative description. As an intersection of computer science and social science, algorithmic fairness under digital transformation should not only inherit the basic theories of the social sciences but also provide concrete methods and capabilities for fairness computing. We therefore start from the definition of algorithmic fairness and survey existing fairness computing methods along three dimensions: social bias, data bias, and model bias. Finally, we compare algorithmic fairness indicators and methods experimentally and analyze the remaining challenges of fairness computing. Our experiments show a trade-off between the fairness and accuracy of the original models, but a consistent relationship between the fairness and accuracy of models trained with fairness methods. Regarding fairness indicators, the correlations between different indicators vary considerably, underscoring the importance of evaluating with diverse indicators. Regarding fairness methods, a single method has limited effect, underscoring the importance of exploring combinations of methods.
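    To make the group fairness indicators compared in the experiments concrete, the sketch below computes two widely used ones, statistical parity difference and equal opportunity difference, for a binary classifier and a binary sensitive attribute. This is a minimal illustration only, not code from the paper; the function names and toy data are our own.

        import numpy as np

        def statistical_parity_difference(y_pred, sensitive):
            """P(Y_hat=1 | S=1) - P(Y_hat=1 | S=0): gap in positive-prediction rates."""
            y_pred, sensitive = np.asarray(y_pred), np.asarray(sensitive)
            return y_pred[sensitive == 1].mean() - y_pred[sensitive == 0].mean()

        def equal_opportunity_difference(y_true, y_pred, sensitive):
            """P(Y_hat=1 | Y=1, S=1) - P(Y_hat=1 | Y=1, S=0): gap in true-positive rates."""
            y_true, y_pred, sensitive = (np.asarray(a) for a in (y_true, y_pred, sensitive))
            tpr = lambda g: y_pred[(sensitive == g) & (y_true == 1)].mean()
            return tpr(1) - tpr(0)

        # Toy data: eight individuals with a binary sensitive attribute (0/1),
        # ground-truth labels, and model predictions.
        y_true    = np.array([1, 0, 1, 1, 0, 1, 0, 1])
        y_pred    = np.array([1, 0, 1, 0, 0, 1, 1, 1])
        sensitive = np.array([0, 0, 0, 0, 1, 1, 1, 1])

        print(statistical_parity_difference(y_pred, sensitive))         # 0.25
        print(equal_opportunity_difference(y_true, y_pred, sensitive))  # ~0.33

    For both metrics, 0 indicates parity between the two groups and larger absolute values indicate greater disparity; a full evaluation compares many such indicators across several fairness methods.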
