    Qian Zhongsheng, Huang Heng, Zhu Hui, Liu Jinping. Multi-Perspective Graph Contrastive Learning Recommendation Method with Layer Attention Mechanism[J]. Journal of Computer Research and Development. DOI: 10.7544/issn1000-1239.202330804

    Multi-Perspective Graph Contrastive Learning Recommendation Method with Layer Attention Mechanism

    Graph contrastive learning is widely employed in recommender systems because of its effectiveness in mitigating the data sparsity issue. However, most current recommendation algorithms based on graph contrastive learning learn from only a single perspective, which severely limits the model's generalization capability. Furthermore, the over-smoothing problem inherent in graph convolutional networks also affects the model's stability. To address these issues, we propose a multi-perspective graph contrastive learning recommendation method with a layer attention mechanism. On the one hand, the method introduces three contrastive learning schemes from two different perspectives. From the view-level perspective, it constructs a perturbation-enhanced view by adding random noise to the original graph and an SVD-enhanced view by singular value decomposition (SVD) reconstruction, and then performs view-level contrastive learning between these two enhanced views. From the node-level perspective, it performs contrastive learning between candidate nodes and their candidate structural neighbors using the semantic information between nodes. The model is optimized by multi-task learning that combines the three contrastive auxiliary tasks with the recommendation task, improving the quality of node embeddings and thereby the model's generalization ability. On the other hand, when learning user and item node embeddings with the graph convolutional network, a layer attention mechanism is used to aggregate the per-layer embeddings into the final node embeddings, which strengthens higher-order connectivity and mitigates the over-smoothing issue. Compared with ten classic models on four publicly available datasets (LastFM, Gowalla, Ifashion, and Yelp), the method achieves average improvements of 3.12% in Recall, 3.22% in Precision, and 4.06% in NDCG, demonstrating the effectiveness of the proposed approach.
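    Two of the components described above lend themselves to a concrete illustration: the layer attention that aggregates per-layer GCN embeddings, and a view-level InfoNCE-style contrastive loss between a noise-perturbed view and an SVD-reconstructed view. The sketch below is a minimal PyTorch illustration under our own simplifying assumptions (hypothetical names such as layer_attention and info_nce, a fixed SVD rank k, and perturbation/SVD applied to embeddings rather than to the interaction graph as in the paper); it is not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def layer_attention(layer_embs):
    """Aggregate per-layer GCN embeddings with softmax attention.

    layer_embs: list of [num_nodes, dim] tensors, one per propagation layer.
    A real model would learn the attention query; here each layer is scored
    against the mean embedding used as a stand-in query (assumption).
    """
    H = torch.stack(layer_embs, dim=1)            # [N, L, d]
    query = H.mean(dim=(0, 1))                    # [d], stand-in for a learned query
    scores = torch.einsum('nld,d->nl', H, query)  # [N, L] per-layer scores
    alpha = F.softmax(scores, dim=1)              # attention weight for each layer
    return torch.einsum('nl,nld->nd', alpha, H)   # weighted sum -> final embeddings

def info_nce(z1, z2, tau=0.2):
    """View-level contrastive loss: the same node in the perturbation-enhanced
    and SVD-enhanced views forms a positive pair; all other nodes are negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                    # [N, N] similarity matrix
    labels = torch.arange(z1.size(0))             # diagonal entries are positives
    return F.cross_entropy(logits, labels)

# Toy usage: 3 propagation layers, random-noise view vs. truncated-SVD view.
N, d, L, k = 100, 32, 3, 8
layer_embs = [torch.randn(N, d) for _ in range(L)]
final_emb = layer_attention(layer_embs)                        # [N, d]

noise_view = final_emb + 0.1 * torch.randn_like(final_emb)     # perturbation-enhanced view
U, S, Vt = torch.linalg.svd(final_emb, full_matrices=False)
svd_view = U[:, :k] @ torch.diag(S[:k]) @ Vt[:k]               # rank-k SVD reconstruction
loss_cl = info_nce(noise_view, svd_view)
```

    Note that in the paper the two enhanced views are built from the original interaction graph and there is an additional node-level contrast with structural neighbors plus the recommendation loss in the multi-task objective; the sketch only fixes the shapes and form of the layer aggregation and one view-level contrastive term.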
