MB-HGCN: A Hierarchical Graph Convolutional Network for Multi-behavior Recommendation (Recommended by CCIR 2024)
Citation:
MB-HGCN: A Hierarchical Graph Convolutional Network for Multi-behavior Recommendation[J]. Journal of Computer Research and Development. DOI: 10.7544/issn1000-1239.202440770
Collaborative filtering-based recommender systems that rely only on single-behavior data often suffer from severe data sparsity in practical applications, resulting in poor performance. Multi-behavior recommendation (MBR) seeks to learn user preferences, represented as vector embeddings, from auxiliary behavior interaction data. By leveraging these preferences for target-behavior recommendation, MBR can mitigate the data sparsity problem and improve prediction accuracy. This work introduces MB-HGCN, a novel recommendation model designed to exploit multi-behavior data. The model leverages a hierarchical graph convolutional network to learn user and item embeddings from a coarse-grained global level to a fine-grained behavior-specific level. It first learns global embeddings on a unified homogeneous graph constructed from the interactions of all behaviors; these global embeddings then serve as the initialization for behavior-specific embedding learning on each behavior graph. Moreover, we emphasize the distinct roles of the user and item behavior-specific embeddings and design two simple yet effective strategies to aggregate these embeddings for users and items, respectively. Finally, we adopt multi-task learning for optimization. Extensive experiments on three real-world benchmark datasets show that MB-HGCN substantially outperforms state-of-the-art methods, achieving relative improvements of 73.93% and 74.21% in HR@10 and NDCG@10, respectively, on the Tmall dataset.
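To make the hierarchical design concrete, the following is a minimal illustrative sketch, not the authors' released code: a parameter-free (LightGCN-style) propagation is run on the unified graph of all behaviors, its output initializes propagation on each behavior-specific graph, and the per-behavior embeddings are aggregated before scoring. The class and function names, the dense adjacency inputs, the mean aggregation, and the single BPR loss per behavior are all simplifying assumptions for exposition; the paper's own aggregation strategies and multi-task weighting are not reproduced here.

```python
# Illustrative sketch of a hierarchical multi-behavior GCN (assumptions noted above).
import torch
import torch.nn as nn


def propagate(adj: torch.Tensor, emb: torch.Tensor, n_layers: int = 2) -> torch.Tensor:
    """Parameter-free graph convolution: repeatedly multiply by the (normalized)
    adjacency matrix and average the layer outputs, LightGCN-style."""
    outs = [emb]
    for _ in range(n_layers):
        emb = adj @ emb
        outs.append(emb)
    return torch.stack(outs, dim=0).mean(dim=0)


class HierarchicalMBGCN(nn.Module):
    def __init__(self, n_users: int, n_items: int, dim: int = 64):
        super().__init__()
        self.n_users = n_users
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        nn.init.xavier_uniform_(self.user_emb.weight)
        nn.init.xavier_uniform_(self.item_emb.weight)

    def forward(self, unified_adj: torch.Tensor, behavior_adjs: list[torch.Tensor]):
        # Coarse-grained level: global embeddings from the unified graph that
        # merges the interactions of all behaviors.
        x0 = torch.cat([self.user_emb.weight, self.item_emb.weight], dim=0)
        global_emb = propagate(unified_adj, x0)

        # Fine-grained level: each behavior graph is propagated starting from
        # the global embeddings (the hierarchical initialization).
        user_behav, item_behav = [], []
        for adj in behavior_adjs:
            emb_b = propagate(adj, global_emb)
            user_behav.append(emb_b[: self.n_users])
            item_behav.append(emb_b[self.n_users:])

        # Assumed aggregation: a plain mean over behaviors for both users and
        # items, used here only for brevity in place of the paper's two strategies.
        users = torch.stack(user_behav, dim=0).mean(dim=0)
        items = torch.stack(item_behav, dim=0).mean(dim=0)
        return users, items


def bpr_loss(users: torch.Tensor, items: torch.Tensor,
             u: torch.Tensor, pos: torch.Tensor, neg: torch.Tensor) -> torch.Tensor:
    """Pairwise ranking loss; summing one such term per behavior would give a
    simple multi-task objective."""
    pos_s = (users[u] * items[pos]).sum(-1)
    neg_s = (users[u] * items[neg]).sum(-1)
    return -torch.log(torch.sigmoid(pos_s - neg_s) + 1e-8).mean()
```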