
    Multi-Interest Aware Sequential Recommender System Based on Contrastive Learning


      Abstract: Recent work on sequential recommendation refines user interests by clustering historical interactions or by using graph convolutional neural networks to capture multi-level correlations among interactions. However, these approaches overlook the mutual influence between users with similar behavioral patterns, as well as the effect of irregular time intervals within interaction sequences on user interests. To address these problems, a multi-interest aware sequential recommendation model based on contrastive learning (MIRec) is proposed. The model captures local preference information within a sequence, such as item dependencies and positional dependencies, and obtains global preference information across similar users through a graph information aggregation mechanism. The user representation fusing local and global preferences is then fed into a capsule network to learn multi-interest representations from the user's interaction sequence. Finally, contrastive learning pulls a user's historical interaction sequence toward its augmented counterpart, yielding multi-interest representations that are insensitive to time intervals and enabling more accurate recommendations. Extensive experiments on two real-world datasets verify the effectiveness of the proposed model.
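The contrastive step described above, pulling a user's historical sequence representation toward its augmented view while pushing it away from other users' views, is commonly implemented as an InfoNCE-style loss. The abstract does not give MIRec's exact formulation, so the following is a minimal NumPy sketch under that assumption; `info_nce_loss` and the toy embeddings are illustrative names, not the paper's actual code.

```python
import numpy as np

def info_nce_loss(z_orig, z_aug, temperature=0.1):
    """InfoNCE-style contrastive loss over a batch of users.

    z_orig, z_aug: (batch, dim) embeddings of each user's original
    and augmented interaction sequence. Each user's original view is
    treated as a positive pair with its own augmented view; all other
    users' augmented views in the batch serve as negatives.
    """
    # L2-normalize so dot products become cosine similarities
    z_orig = z_orig / np.linalg.norm(z_orig, axis=1, keepdims=True)
    z_aug = z_aug / np.linalg.norm(z_aug, axis=1, keepdims=True)
    logits = z_orig @ z_aug.T / temperature        # (batch, batch)
    # positives lie on the diagonal; take -log softmax over each row
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
z = rng.normal(size=(4, 8))                        # 4 users, dim 8
loss_close = info_nce_loss(z, z + 0.01 * rng.normal(size=(4, 8)))
loss_far = info_nce_loss(z, rng.normal(size=(4, 8)))
print(loss_close, loss_far)  # aligned views yield the lower loss
```

Minimizing this loss makes the representation agree across the two views of the same sequence; when the augmented view perturbs time intervals, the learned multi-interest representation becomes insensitive to them, which is the effect the abstract describes.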
