ISSN 1000-1239 CN 11-1777/TP

Journal of Computer Research and Development ›› 2021, Vol. 58 ›› Issue (11): 2524-2537. doi: 10.7544/issn1000-1239.2021.20200564


Shared-Account Cross-Domain Sequential Recommendation with Self-Attention Network

Guo Lei1, Li Qiuju1, Liu Fang’ai2, Wang Xinhua2   

  1. School of Business, Shandong Normal University, Jinan 250358;
  2. School of Information Science and Engineering, Shandong Normal University, Jinan 250358
  • Online: 2021-11-01
  • Supported by: 
    This work was supported by the National Natural Science Foundation of China (61602282, 61772321) and the China Postdoctoral Science Foundation (2016M602181).

Abstract: Shared-account cross-domain sequential recommendation (SCSR) is the task of recommending the next item in a setting where users share a single account and their behavior records are available in multiple domains. Compared with traditional sequential recommendation tasks, SCSR is challenging because: 1) the interactions generated by an account are a mixture of multiple users' behaviors; 2) the behaviors in one domain might help improve recommendations in another domain. Most recent related work is based on recurrent neural networks (RNNs). Due to the inherent drawbacks of RNNs, RNN-based methods are time-consuming and, more importantly, fail to capture long-range dependencies in accounts' interactions. In this work, we target SCSR and propose a self-attention-based cross-domain recommendation model (SCRM) to address these two challenges. Specifically, to model the mixed interactions from multiple users of a single account, a multi-head self-attention network is first introduced. Then, to leverage the information in one domain to improve recommendation in another, a cross-domain transfer network based on a multi-layer cross-map perceptual network is proposed. Finally, a hybrid recommendation decoder is explored that combines information from both domains to make recommendations in each domain. We conduct experiments on a real-world dataset, HVIDEO, and the results show that SCRM outperforms all baseline methods in terms of MRR and Recall. In terms of training efficiency, SCRM requires less training and learning time than RNN-based methods.
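The multi-head self-attention mechanism mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's actual SCRM implementation; it is a generic scaled dot-product multi-head self-attention over a sequence of item embeddings, with the weight matrices, dimensions, and function name chosen purely for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(X, Wq, Wk, Wv, Wo, num_heads):
    """Scaled dot-product self-attention over X of shape (seq_len, d_model)."""
    seq_len, d_model = X.shape
    d_head = d_model // num_heads
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                  # each (seq_len, d_model)

    # Split the model dimension into heads: (num_heads, seq_len, d_head)
    def split(M):
        return M.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)

    Qh, Kh, Vh = split(Q), split(K), split(V)
    scores = Qh @ Kh.transpose(0, 2, 1) / np.sqrt(d_head)  # (heads, seq, seq)
    attn = softmax(scores, axis=-1)                        # attention weights
    out = attn @ Vh                                        # (heads, seq, d_head)
    out = out.transpose(1, 0, 2).reshape(seq_len, d_model)  # concatenate heads
    return out @ Wo                                        # final projection

# Toy example: a sequence of 5 item embeddings of dimension 8, 2 heads.
rng = np.random.default_rng(0)
d_model, seq_len, heads = 8, 5, 2
X = rng.standard_normal((seq_len, d_model))
Wq, Wk, Wv, Wo = (rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(4))
Y = multi_head_self_attention(X, Wq, Wk, Wv, Wo, heads)
print(Y.shape)  # (5, 8)
```

Because every position attends to every other position directly, such a network can relate interactions far apart in an account's history in a single step, and the attention computation parallelizes across the sequence, which is the basis for the long-range-dependency and efficiency advantages over RNNs claimed above.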

Key words: multi-head self-attention, shared-account recommendation, cross-domain recommendation, sequence modeling, collaborative filtering
