
    Large Language Model-based Trusted Multi-modal Recommendation

    Abstract: Sequential recommendation centers on mining users' preferences and behavior patterns from their interaction sequences. Existing works have recognized the inadequacy of single-modal interaction data and have drawn on large amounts of multimodal data, such as item reviews and homepage images, to enrich interaction information and improve recommendation performance. However, such multimodal data are often interspersed with unavoidable noise, which may limit the exploration of personalized user preferences. Although suppressing information that is inconsistent across modalities can reduce noise interference, it is almost impossible to completely eliminate noise from user-generated multimodal content. To address these challenges, we propose a Large language model-based Trusted multi-modal Recommendation (Large-TR) algorithm that aims to provide trustworthy recommendations in noisy multimodal data scenarios. Specifically, the algorithm relies on the strong natural language understanding capability of large language models to efficiently filter the noise in multimodal data and to model user preferences more accurately and in finer detail. In addition, we design a trustworthy decision mechanism that dynamically evaluates the uncertainty of recommendation results, ensuring their usability in high-risk scenarios. Experimental results on four widely used public datasets show that the proposed algorithm outperforms baseline algorithms. Our source code is available at https://github.com/hhbray/Large-TR.
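    To make the trustworthy decision mechanism more concrete, below is a minimal sketch rather than the paper's implementation: it assumes the recommender exposes per-item relevance scores, uses normalized predictive entropy as a stand-in for the uncertainty estimate, and abstains from recommending when uncertainty exceeds a risk threshold. The function names (`trusted_top1`, `predictive_entropy`) and the threshold value are illustrative assumptions, not part of Large-TR.

```python
# Minimal sketch (not the paper's implementation) of an uncertainty-gated
# "trusted decision" step. Assumption: the recommender outputs per-item
# relevance scores; normalized predictive entropy over the softmax
# distribution serves as a stand-in uncertainty measure, and recommendations
# whose uncertainty exceeds a risk threshold are withheld in high-risk scenarios.
import math
from typing import List, Optional


def softmax(scores: List[float]) -> List[float]:
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]


def predictive_entropy(probs: List[float]) -> float:
    # Normalized entropy in [0, 1]; higher means less certain.
    h = -sum(p * math.log(p + 1e-12) for p in probs)
    return h / math.log(len(probs))


def trusted_top1(scores: List[float], risk_threshold: float = 0.8) -> Optional[int]:
    """Return the index of the top-ranked item, or None (abstain) when the
    normalized predictive entropy exceeds the risk threshold."""
    probs = softmax(scores)
    if predictive_entropy(probs) > risk_threshold:
        return None  # too uncertain to recommend in a high-risk scenario
    return max(range(len(probs)), key=probs.__getitem__)


if __name__ == "__main__":
    confident = [4.0, 0.5, 0.2, 0.1]      # clear preference for item 0
    ambiguous = [1.0, 0.98, 0.99, 1.01]   # near-uniform scores
    print(trusted_top1(confident))   # -> 0
    print(trusted_top1(ambiguous))   # -> None (abstain)
```

    In a deployed system, the abstention branch would more likely fall back to a safer strategy (for example, popularity-based candidates) rather than return no result; the gate simply illustrates how an uncertainty estimate can control whether a recommendation is surfaced.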

       
