Agent Trust Boost via Reinforcement Learning DQN
-
Abstract: The trust-based recommendation system is an important application of recommender systems built on social networks: it combines the trust relationships between users to recommend items. However, previous studies generally assume that the trust value between users is fixed, so they cannot respond promptly to dynamic changes in user trust and preferences, which degrades recommendation quality. In practice, after a user accepts a recommendation, the difference between the actual evaluation and the expected evaluation affects trust: the user's trust in the recommender increases when the actual evaluation exceeds the expectation, and decreases otherwise. Focusing on the dynamics of trust and the process by which trust changes between users, this paper proposes a trust-boosting method based on reinforcement learning. The least mean square (LMS) algorithm is used to learn the dynamic impact of the evaluation difference on the user's trust, and the deep Q-learning (DQN) method is used to model the process by which the recommender learns the user's preferences and thereby boosts the trust value. Finally, a polynomial-time algorithm is proposed to compute trust values and recommendations; it motivates the recommender to learn the user's preferences and keeps the user's trust in the recommender at a high level. Experiments show that, when applied to recommendation systems, the method responds quickly to dynamic changes in user preferences and, compared with other methods, provides more timely and more accurate recommendations.
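The trust-update idea described in the abstract (trust rises when the actual evaluation exceeds the expected one, falls otherwise, with the sensitivity adapted by least mean squares) can be sketched as follows. This is a minimal illustration, not the paper's exact model: the weight `w`, step size `mu`, linear update form, and trust bounds are all assumptions.

```python
# Sketch of an LMS-style trust update. The weight w maps the evaluation
# difference to a trust change and is itself adapted by the classic LMS
# rule w += mu * error * input. All names and constants are illustrative.

def lms_trust_update(trust, w, expected, actual, mu=0.1):
    """One step: adapt w from the evaluation difference, then move
    trust up or down accordingly, clipped to [0, 1]."""
    diff = actual - expected          # positive -> trust should rise
    pred = w * diff                   # predicted trust change
    err = diff - pred                 # LMS prediction error
    w = w + mu * err * diff           # LMS weight update
    trust = min(1.0, max(0.0, trust + w * diff))
    return trust, w

# A recommendation that beats expectations raises trust; one that
# disappoints lowers it.
trust, w = 0.5, 0.2
for expected, actual in [(3.0, 4.5), (3.0, 4.0), (4.0, 2.5)]:
    trust, w = lms_trust_update(trust, w, expected, actual)
```

Clipping keeps the trust value interpretable as a degree in [0, 1]; the asymmetry between rises and falls in the paper's actual model may differ from this symmetric sketch.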
-
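The DQN component, in which the recommender learns user preferences through trial recommendations, can be sketched with the standard ingredients of DQN: epsilon-greedy action selection, an experience-replay buffer, and Bellman-target updates. In this minimal sketch a plain Q-table stands in for the deep network, and the toy environment (states, actions, `preferred` table, reward values) is an assumption, not the paper's setting.

```python
import random

# DQN-style sketch: states are user contexts, actions are item categories,
# and the reward is a proxy for the evaluation difference (positive when
# the recommendation matches the user's hidden preference).
N_STATES, N_ACTIONS = 4, 3
GAMMA, ALPHA, EPS = 0.9, 0.1, 0.1
preferred = [0, 2, 1, 2]            # hidden user preference per state (toy)
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
buffer = []                         # experience replay: (s, a, r, s')

def greedy(s):
    return max(range(N_ACTIONS), key=lambda a: Q[s][a])

def act(s):
    # Epsilon-greedy: explore occasionally, otherwise recommend the best item.
    return random.randrange(N_ACTIONS) if random.random() < EPS else greedy(s)

def step(s, a):
    # Reward proxy: +1 when the recommendation matches the preference.
    r = 1.0 if a == preferred[s] else -0.2
    return r, random.randrange(N_STATES)

def train(episodes=2000, batch=16):
    random.seed(0)
    s = 0
    for _ in range(episodes):
        a = act(s)
        r, s2 = step(s, a)
        buffer.append((s, a, r, s2))
        if len(buffer) >= batch:
            # Replay a random minibatch and move Q toward the Bellman target.
            for bs, ba, br, bs2 in random.sample(buffer, batch):
                target = br + GAMMA * max(Q[bs2])
                Q[bs][ba] += ALPHA * (target - Q[bs][ba])
        s = s2

train()
policy = [greedy(s) for s in range(N_STATES)]
```

After training, the greedy policy recommends each state's preferred category, which is the mechanism by which the recommender keeps earning positive evaluation differences and thus sustains a high trust value.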
Journal citations (15)
1. 王靖, 方旭明. Joint intelligent power and channel allocation algorithm for Wi-Fi 7 multi-link integrated sensing and communication. 计算机应用, 2025(02): 563-570.
2. 葛振兴, 向帅, 田品卓, 高阳. Solving the Guandan poker game with deep reinforcement learning. 计算机研究与发展, 2024(01): 145-155.
3. Xiaodong Zhuang, Xiangrong Tong. A dynamic algorithm for trust inference based on double DQN in the internet of things. Digital Communications and Networks, 2024(04): 1024-1034.
4. 李迎港, 童向荣. Knowledge-guided adaptive sequential reinforcement learning model. 模式识别与人工智能, 2023(02): 108-119.
5. 冯景瑜, 张静, 时翌飞. A two-layer filtering method for encrypted traffic with terminal anonymity in the Internet of Things. 西安邮电大学学报, 2023(02): 72-81.
6. 冯景瑜, 李嘉伦, 张宝军, 韩刚, 张文波. A proactive zero-trust model against APT data theft in the industrial Internet. 西安电子科技大学学报, 2023(04): 76-88.
7. 丁世飞, 杜威, 郭丽丽, 张健, 徐晓. A multi-agent deep deterministic policy gradient method with double critics. 计算机研究与发展, 2023(10): 2394-2404.
8. 冯景瑜, 王锦康, 张宝军, 刘宇航. A lightweight encrypted-traffic anomaly detection scheme based on trust filtering. 西安邮电大学学报, 2023(05): 56-66.
9. 徐敏, 胡聪, 王萍, 张翠翠, 王鹏. Performance optimization of the Ceph file system based on reinforcement learning. 微型电脑应用, 2022(03): 83-86.
10. 冯景瑜, 于婷婷, 王梓莹, 张文波, 韩刚, 黄文华. An edge zero-trust model against compromised-terminal threats in power IoT scenarios. 计算机研究与发展, 2022(05): 1120-1132.
11. 王鑫, 赵清杰, 于重重, 张长春, 陈涌泉. A path planning method for soft landing of multi-node probes. 宇航学报, 2022(03): 366-373.
12. 张文璐, 霍子龙, 赵西雨, 崔琪楣, 陶小峰. Wireless distributed cooperative decision-making for multi-robot localization in smart factories. 无线电通信技术, 2022(04): 718-727.
13. 王岩, 童向荣. Cross-domain trust prediction based on tri-training and extreme learning machine. 计算机研究与发展, 2022(09): 2015-2026.
14. 聂雷, 刘博, 李鹏, 何亨. A heterogeneous vehicular network selection method based on multi-agent Q-learning. 计算机工程与科学, 2021(05): 836-844.
15. 洪志理, 赖俊, 曹雷, 陈希亮. Research on intelligent recommendation algorithms incorporating user interest modeling. 信息技术与网络安全, 2021(11): 37-48.
Other citations (15)