ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development (计算机研究与发展) ›› 2022, Vol. 59 ›› Issue (9): 2039-2050. doi: 10.7544/issn1000-1239.20210474

• Artificial Intelligence •




An Approach for Training Moral Agents via Reinforcement Learning

Gu Tianlong1,2, Gao Hui2, Li Long1,2, Bao Xuguang2, Li Yunhui2   

  1. 1(College of Cyber Security, Jinan University, Guangzhou 510632);2(Guangxi Key Laboratory of Trusted Software (Guilin University of Electronic Technology), Guilin, Guangxi 541004)
  • Online: 2022-09-01
  • Supported by: 
    This work was supported by the National Natural Science Foundation of China (62172350, 61966009, 61961007, 61862016, 62006057), Guangxi Natural Science Foundation (2019GXNSFBA245049, 2019GXNSFBA245059), and the Fundamental Research Funds for the Central Universities (21621028).


Abstract: Artificial agents such as autonomous vehicles and healthcare robots are playing an increasingly important role in human life, and their moral issues have attracted widespread concern. To equip agents with the ability to comply with basic human ethical norms, a novel approach for training artificial moral agents is proposed based on crowdsourcing and reinforcement learning. First, crowdsourcing is used to obtain data sets of behavior examples, and text clustering and association analysis are used to generate plot graphs and trajectory trees, which define the basic behavior space of an agent and specify the order in which behaviors occur. Second, the concept of meta-ethical behavior is proposed, which expands the behavior space of agents by generalizing similar behaviors across different scenarios, and nine kinds of meta-ethical behaviors are extracted from the Code of Daily Behavior of Middle School Students. Finally, a behavior grading mechanism and a corresponding reinforcement-learning reward and punishment function are proposed, on which the training of moral agents is based. By simulating a drug-purchase scenario from human life, the Q-learning and DQN (deep Q-networks) algorithms are used to carry out moral-agent training experiments respectively. Experimental results show that the trained agents can complete the expected tasks in an ethical manner, which verifies the rationality and effectiveness of the proposed method.
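To illustrate the idea of a behavior grading mechanism driving a reward and punishment function, the sketch below trains a tabular Q-learning agent on a toy drug-purchase scenario. The states, actions, and reward magnitudes here are illustrative assumptions for exposition, not the authors' actual design: ethically graded actions receive graded rewards, so the greedy policy learned after training follows the ethical path to the goal.

```python
# Hedged sketch: tabular Q-learning with an ethically graded reward function,
# loosely mirroring a drug-purchase scenario. All states, actions, and
# reward values below are illustrative assumptions.
import random

# States: 0 = pharmacy entrance, 1 = in queue, 2 = at counter, 3 = done.
# Each action maps to (next_state, reward); unethical actions are penalized.
ACTIONS = {
    0: {"enter_queue": (1, 0.0), "cut_in_line": (2, -10.0)},
    1: {"wait_turn": (2, -0.1)},                               # small time cost
    2: {"pay_for_drug": (3, 10.0), "steal_drug": (3, -20.0)},  # goal vs. penalty
}

def q_learning(episodes=2000, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s, acts in ACTIONS.items() for a in acts}
    for _ in range(episodes):
        s = 0
        while s != 3:
            acts = list(ACTIONS[s])
            # epsilon-greedy action selection
            a = rng.choice(acts) if rng.random() < epsilon else max(
                acts, key=lambda x: q[(s, x)])
            s2, r = ACTIONS[s][a]
            future = 0.0 if s2 == 3 else max(q[(s2, b)] for b in ACTIONS[s2])
            q[(s, a)] += alpha * (r + gamma * future - q[(s, a)])
            s = s2
    return q

q = q_learning()
# Greedy policy after training: enter_queue -> wait_turn -> pay_for_drug
policy = {s: max(ACTIONS[s], key=lambda a: q[(s, a)]) for s in ACTIONS}
print(policy)
```

Because cutting in line and stealing carry large negative rewards while waiting carries only a small time cost, the discounted return of the ethical trajectory dominates, and the learned greedy policy completes the task ethically.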

Key words: moral agent, ethically aligned design, ethical grading, reinforcement learning, crowdsourcing