
    GraphMLP-Mixer: A Graph-MLP Architecture for Efficient Multi-Behavior Sequential Recommendation

    Abstract: In multi-behavior sequential recommendation, graph neural networks (GNNs) are widely used but have notable limitations, including insufficient modeling of collaborative signals across sequences and difficulty in handling long-range dependencies. To address these problems, a new framework named GraphMLP-Mixer is proposed. The framework first constructs a global item graph to strengthen the modeling of cross-sequence collaborative signals, and then combines the MLP-Mixer architecture with graph neural networks to obtain a graph-MLP-mixer model that thoroughly mines user interests. GraphMLP-Mixer has two major advantages. First, it effectively captures global dependencies in user behavior while alleviating the information over-squashing problem. Second, it is markedly more efficient in both time and space: its complexity grows linearly with the number of user interactions, outperforming existing GNN-based multi-behavior sequential recommendation models. Extensive experiments on three real-world public datasets verify the effectiveness and efficiency of GraphMLP-Mixer for multi-behavior sequential recommendation.
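
    The abstract names two building blocks: a GNN applied to a global item graph that links items across different users' sequences, and an MLP-Mixer applied along each user's multi-behavior sequence. The paper's actual layer definitions are not given on this page, so the following is only a minimal PyTorch sketch of how such a combination could be wired together; every module name, the mean-style graph aggregation, the behavior-type embedding, and all dimensions are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class GlobalGraphLayer(nn.Module):
    """Illustrative graph layer over a global item graph.

    `adj` is assumed to be a row-normalized item-item adjacency matrix built
    from co-occurrence across all users' sequences; the abstract does not
    specify the actual graph construction.
    """

    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, item_emb, adj):
        # Propagate collaborative signals between items that appear in
        # different users' sequences.
        return torch.relu(self.proj(adj @ item_emb))


class MixerBlock(nn.Module):
    """Standard MLP-Mixer block: a token-mixing MLP over sequence positions,
    then a channel-mixing MLP over embedding dimensions."""

    def __init__(self, seq_len, dim, hidden):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.token_mlp = nn.Sequential(
            nn.Linear(seq_len, hidden), nn.GELU(), nn.Linear(hidden, seq_len))
        self.norm2 = nn.LayerNorm(dim)
        self.channel_mlp = nn.Sequential(
            nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))

    def forward(self, x):                          # x: (batch, seq_len, dim)
        y = self.norm1(x).transpose(1, 2)          # (batch, dim, seq_len)
        x = x + self.token_mlp(y).transpose(1, 2)  # mix across positions
        x = x + self.channel_mlp(self.norm2(x))    # mix across channels
        return x


class GraphMLPMixerSketch(nn.Module):
    """Hypothetical composition: graph-enhanced item embeddings feed a Mixer
    that summarizes each user's multi-behavior sequence into an interest vector."""

    def __init__(self, num_items, num_behaviors, seq_len, dim=64, hidden=128):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, dim)
        self.behavior_emb = nn.Embedding(num_behaviors, dim)
        self.graph_layer = GlobalGraphLayer(dim)
        self.mixer = MixerBlock(seq_len, dim, hidden)

    def forward(self, item_seq, behavior_seq, adj):
        # Graph-enhanced item table captures cross-sequence signals.
        table = self.graph_layer(self.item_emb.weight, adj)
        # Inject the behavior type of each interaction into the sequence.
        x = table[item_seq] + self.behavior_emb(behavior_seq)
        h = self.mixer(x)                          # (batch, seq_len, dim)
        user_repr = h.mean(dim=1)                  # pool to a user interest vector
        return user_repr @ table.T                 # scores over all items
```

    In this sketch, the token- and channel-mixing MLPs use a fixed hidden width, so the per-user mixing cost grows linearly with sequence length, which is consistent with the linear-complexity claim in the abstract.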

       
