Qi Faxin, Tong Xiangrong, Yu Lei. Agent Trust Boost via Reinforcement Learning DQN[J]. Journal of Computer Research and Development, 2020, 57(6): 1227-1238. DOI: 10.7544/issn1000-1239.2020.20190403

Agent Trust Boost via Reinforcement Learning DQN

Funds: This work was supported by the National Natural Science Foundation of China (61572418).
  • Published Date: May 31, 2020
Abstract: Trust-aware recommendation is an important application of recommender systems built on social networks: it exploits the trust relationships between users to recommend items. However, previous studies generally assume that the trust value between users is fixed, so they cannot respond in a timely manner to dynamic changes in user trust and preferences, which degrades recommendation quality. In fact, after a user receives a recommendation, the difference between the actual evaluation and the expected evaluation is correlated with the trust value: the user's trust in the recommender increases when the actual evaluation is higher than the expected one, and decreases when it is lower. Based on the dynamics of trust and the process by which trust between users changes, this paper proposes a trust-boosting method built on reinforcement learning. A least-mean-square (LMS) algorithm is used to learn the dynamic impact of the evaluation difference on the user's trust, and a deep Q-network (DQN) is used to simulate the process of learning the user's preferences and boosting the trust value. Finally, a polynomial-time algorithm is proposed to compute trust values and recommendations; it motivates the recommender to learn the user's preferences and keeps the user's trust in the recommender at a high level. Experiments indicate that, applied to recommender systems, the method responds quickly to changes in user preferences and achieves better recommendation accuracy than competing methods.
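The abstract pairs two learners: an LMS filter that tracks how the evaluation difference moves trust, and a DQN that learns the user's preferences. The paper page publishes no code, so the following is a minimal sketch of the LMS idea under stated assumptions: the change in trust is modeled as a linear function of the evaluation difference, and the feature layout, step size, and sample values are illustrative, not the authors' notation or data.

```python
import numpy as np

def lms_trust_update(weights, features, observed_delta, mu=0.01):
    """One least-mean-square step: fit the observed change in trust as a
    linear function of evaluation-difference features (illustrative model).

    weights        -- current LMS coefficients
    features       -- e.g. [actual_eval - expected_eval, 1.0] (bias term)
    observed_delta -- measured change in the user's trust after feedback
    mu             -- LMS step size (learning rate)
    """
    error = observed_delta - weights @ features   # prediction error
    return weights + mu * error * features        # standard LMS update

# Illustrative run: trust rises when the actual rating beats expectation.
w = np.zeros(2)
samples = [(4.5, 3.0, 0.12), (2.0, 3.5, -0.10), (4.0, 3.8, 0.02)]
for actual, expected, delta in samples:
    w = lms_trust_update(w, np.array([actual - expected, 1.0]), delta)
print(w)  # learned sensitivity of trust to the evaluation difference
```

A positive learned coefficient on the evaluation difference reproduces the dynamic described in the abstract: a recommendation rated above expectation raises trust, and one rated below lowers it.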
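For the preference-learning side, the sketch below shows a generic DQN training loop in PyTorch, not the authors' implementation: the state is assumed to encode the current trust level plus a summary of recent feedback, each action indexes a candidate item, and the reward is assumed to be the resulting trust gain. STATE_DIM, N_ACTIONS, the network shape, and all hyperparameters are placeholders.

```python
import random
from collections import deque

import torch
import torch.nn as nn

# Placeholder sizes: the state is assumed to encode the current trust
# level plus a summary of recent feedback; each action is a candidate item.
STATE_DIM, N_ACTIONS = 8, 20

class QNet(nn.Module):
    """Small MLP mapping a state to one Q-value per candidate item."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)

policy, target = QNet(), QNet()
target.load_state_dict(policy.state_dict())   # target net starts as a copy
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
buffer = deque(maxlen=10_000)                 # experience replay
gamma, eps = 0.95, 0.1

def select_item(state):
    """Epsilon-greedy choice of which item to recommend."""
    if random.random() < eps:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return policy(state).argmax().item()

def remember(s, a, r, s2):
    """Store one transition; reward r is assumed to be the trust gain."""
    buffer.append((s, torch.tensor(a), torch.tensor(float(r)), s2))

def train_step(batch_size=32):
    """One DQN update: regress Q(s,a) toward r + gamma * max_a' Q_target(s',a')."""
    if len(buffer) < batch_size:
        return
    s, a, r, s2 = (torch.stack(col) for col in zip(*random.sample(buffer, batch_size)))
    q = policy(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = target(s2).max(dim=1).values
    loss = nn.functional.mse_loss(q, r + gamma * q_next)
    opt.zero_grad(); loss.backward(); opt.step()
    # Periodically: target.load_state_dict(policy.state_dict())
```

The epsilon-greedy policy balances exploiting the current preference estimate against probing for drift, which is what lets the recommender react when the user's preferences change.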
  • Related Articles

    [1]Gu Tianlong, Gao Hui, Li Long, Bao Xuguang, Li Yunhui. An Approach for Training Moral Agents via Reinforcement Learning[J]. Journal of Computer Research and Development, 2022, 59(9): 2039-2050. DOI: 10.7544/issn1000-1239.20210474
    [2]Yu Xian, Li Zhenyu, Sun Sheng, Zhang Guangxing, Diao Zulong, Xie Gaogang. Adaptive Virtual Machine Consolidation Method Based on Deep Reinforcement Learning[J]. Journal of Computer Research and Development, 2021, 58(12): 2783-2797. DOI: 10.7544/issn1000-1239.2021.20200366
    [3]Wu Jinjin, Liu Quan, Chen Song, Yan Yan. Averaged Weighted Double Deep Q-Network[J]. Journal of Computer Research and Development, 2020, 57(3): 576-589. DOI: 10.7544/issn1000-1239.2020.20190159
    [4]Feng Wei, Hang Wenlong, Liang Shuang, Liu Xuejun, Wang Hui. Deep Stack Least Square Classifier with Inter-Layer Model Knowledge Transfer[J]. Journal of Computer Research and Development, 2019, 56(12): 2589-2599. DOI: 10.7544/issn1000-1239.2019.20180741
    [5]Bai Chenjia, Liu Peng, Zhao Wei, Tang Xianglong. Active Sampling for Deep Q-Learning Based on TD-error Adaptive Correction[J]. Journal of Computer Research and Development, 2019, 56(2): 262-280. DOI: 10.7544/issn1000-1239.2019.20170812
    [6]Zhang Kaifeng, Yu Yang. Methodologies for Imitation Learning via Inverse Reinforcement Learning: A Review[J]. Journal of Computer Research and Development, 2019, 56(2): 254-261. DOI: 10.7544/issn1000-1239.2019.20170578
    [7]Zhu Fei, Wu Wen, Liu Quan, Fu Yuchen. A Deep Q-Network Method Based on Upper Confidence Bound Experience Sampling[J]. Journal of Computer Research and Development, 2018, 55(8): 1694-1705. DOI: 10.7544/issn1000-1239.2018.20180148
    [8]Zhu Fei, Liu Quan, Fu Qiming, Fu Yuchen. A Least Square Actor-Critic Approach for Continuous Action Space[J]. Journal of Computer Research and Development, 2014, 51(3): 548-558.
    [9]Shi Chuan, Shi Zhongzhi, Wang Maoguang. Online Hierarchical Reinforcement Learning Based on Path-matching[J]. Journal of Computer Research and Development, 2008, 45(9).
    [10]Chen Zonghai, Wen Feng, Nie Jianbin, Wu Xiaoshu. A Reinforcement Learning Method Based on Node-Growing k-Means Clustering Algorithm[J]. Journal of Computer Research and Development, 2006, 43(4): 661-666.
  • Cited by

    Citations in journals (15)

    1. 王靖, 方旭明. Intelligent joint power and channel allocation algorithm for Wi-Fi 7 multi-link integrated sensing and communication. 计算机应用. 2025(02): 563-570.
    2. 葛振兴, 向帅, 田品卓, 高阳. Solving the Guandan poker game via deep reinforcement learning. 计算机研究与发展. 2024(01): 145-155.
    3. Xiaodong Zhuang, Xiangrong Tong. A dynamic algorithm for trust inference based on double DQN in the internet of things. Digital Communications and Networks. 2024(04): 1024-1034.
    4. 李迎港, 童向荣. Knowledge-guided adaptive sequential reinforcement learning model. 模式识别与人工智能. 2023(02): 108-119.
    5. 冯景瑜, 张静, 时翌飞. Two-layer filtering method for encrypted traffic with terminal anonymity in the Internet of Things. 西安邮电大学学报. 2023(02): 72-81.
    6. 冯景瑜, 李嘉伦, 张宝军, 韩刚, 张文波. Proactive zero-trust model against APT data theft in the Industrial Internet. 西安电子科技大学学报. 2023(04): 76-88.
    7. 丁世飞, 杜威, 郭丽丽, 张健, 徐晓. Multi-agent deep deterministic policy gradient method based on double critics. 计算机研究与发展. 2023(10): 2394-2404.
    8. 冯景瑜, 王锦康, 张宝军, 刘宇航. Lightweight encrypted-traffic anomaly detection scheme based on trust filtering. 西安邮电大学学报. 2023(05): 56-66.
    9. 徐敏, 胡聪, 王萍, 张翠翠, 王鹏. Performance optimization of the Ceph file system based on reinforcement learning. 微型电脑应用. 2022(03): 83-86.
    10. 冯景瑜, 于婷婷, 王梓莹, 张文波, 韩刚, 黄文华. Edge zero-trust model against compromised-terminal threats in power IoT scenarios. 计算机研究与发展. 2022(05): 1120-1132.
    11. 王鑫, 赵清杰, 于重重, 张长春, 陈涌泉. Path planning method for the soft landing of multi-node probes. 宇航学报. 2022(03): 366-373.
    12. 张文璐, 霍子龙, 赵西雨, 崔琪楣, 陶小峰. Wireless distributed cooperative decision-making for multi-robot positioning in smart factories. 无线电通信技术. 2022(04): 718-727.
    13. 王岩, 童向荣. Cross-domain trust prediction based on tri-training and extreme learning machines. 计算机研究与发展. 2022(09): 2015-2026.
    14. 聂雷, 刘博, 李鹏, 何亨. Heterogeneous vehicular network selection method based on multi-agent Q-learning. 计算机工程与科学. 2021(05): 836-844.
    15. 洪志理, 赖俊, 曹雷, 陈希亮. Research on intelligent recommendation algorithms incorporating user interest modeling. 信息技术与网络安全. 2021(11): 37-48.

    Other citation types (15)
