摘要:
传统跨评分协同过滤范式忽视了目标域中评分密度对用户和项目隐向量精度的影响,导致评分稀疏区域评分预测不够准确. 为克服区域评分密度对评分预测的影响,基于迁移学习思想提出一种跨区域跨评分协同过滤推荐算法(cross-rating collaborative filtering recommendation algorithm,CRCRCF),相对于传统跨评分协同过滤范式,该算法不仅能有效挖掘辅助域重要知识,而且可以挖掘目标域中评分密集区域的重要知识,进一步提升目标域整体,尤其是评分稀疏区域的评分预测精度. 首先,针对用户和项目,分别进行活跃用户和非活跃用户、热门项目和非热门项目的划分. 利用图卷积矩阵补全算法提取目标域活跃用户和热门项目、辅助域中全体用户和项目的隐向量. 其次,对活跃用户和热门项目分别构建基于自教学习的深度回归网络学习目标域和辅助域中隐向量的映射关系. 然后,将映射关系泛化到全局,利用非活跃用户和非热门项目在辅助域上相对较准确的隐向量推导其目标域上的隐向量,依次实现了跨区域映射关系迁移和跨评分的隐向量信息迁移. 最后,以求得的非活跃用户和非热门项目在目标域上的隐向量为约束,提出受限图卷积矩阵补全模型,并给出相应推荐结果. 在MovieLens和Netflix数据集上的仿真实验显示CRCRCF算法较其他最先进算法具有明显优势.
Abstract: The traditional cross-rating collaborative filtering paradigm ignores the influence of rating density in the target domain on the accuracy of user and item latent vectors, which leads to less accurate rating prediction in rating-sparse regions. To overcome the influence of regional rating density on rating prediction, a cross-region and cross-rating collaborative filtering recommendation algorithm (CRCRCF) is proposed based on the idea of transfer learning. Compared with the traditional cross-rating collaborative filtering paradigm, CRCRCF effectively exploits not only the important knowledge in the auxiliary domain but also the important knowledge in the rating-dense regions of the target domain, and thus further improves the rating prediction accuracy over the whole target domain, especially in the rating-sparse regions. Firstly, users are divided into active and inactive users, and items into popular and unpopular items. The graph convolutional matrix completion (GC-MC) algorithm is used to extract the latent vectors of the active users and popular items in the target domain and of all users and items in the auxiliary domain. Secondly, deep regression networks based on self-taught learning are constructed for active users and popular items, respectively, to learn the mapping relationships between the latent vectors in the target domain and those in the auxiliary domain. Then the mapping relationships are generalized to the whole target domain, and the relatively accurate latent vectors of inactive users and unpopular items in the auxiliary domain are used to derive their latent vectors in the target domain, which realizes the cross-region transfer of mapping relationships and the cross-rating transfer of latent-vector information in turn. Finally, a restricted graph convolutional matrix completion model is proposed with the obtained target-domain latent vectors of inactive users and unpopular items as constraints, and the corresponding recommendation results are produced. Simulation experiments on the MovieLens and Netflix datasets show that CRCRCF has obvious advantages over other state-of-the-art algorithms.
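To make the pipeline sketched in the abstract concrete, the fragment below illustrates the cross-region/cross-rating transfer step: users are split by an activity threshold, a regression network learns the auxiliary-to-target mapping on the rating-dense side, and the mapping is then applied to the inactive users. This is a minimal sketch under stated assumptions: random placeholder latent vectors stand in for GC-MC output, sklearn's MLPRegressor stands in for the SDAE-based deep regression network, and the helper split_by_activity, the quantile-based notion of "active", and all parameter choices are illustrative rather than taken from the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def split_by_activity(rating_counts, mu):
    """Mark the top-mu fraction of entities (by rating count) as the dense region."""
    threshold = np.quantile(rating_counts, 1.0 - mu)
    return rating_counts >= threshold              # True = active user / popular item

rng = np.random.default_rng(0)
n_users, d = 5000, 45                              # d = 45 as in Table 2 (target domain)
user_counts = rng.poisson(50, n_users)             # placeholder per-user rating counts
active = split_by_activity(user_counts, mu=0.10)   # mu1 = 10%, the ML10M optimum in Table 4

U_aux = rng.normal(size=(n_users, d))              # auxiliary-domain latent vectors (all users)
U_tgt_active = rng.normal(size=(active.sum(), d))  # target-domain latent vectors (active users only)

# Learn the auxiliary -> target mapping on the rating-dense region; hidden sizes
# (35, 20) mirror the k2/k3 values reported for the SDAE in Table 3.
map_net = MLPRegressor(hidden_layer_sizes=(35, 20), max_iter=2000, random_state=0)
map_net.fit(U_aux[active], U_tgt_active)

# Generalize the mapping: target-domain latent vectors of inactive users are derived
# from their (relatively more reliable) auxiliary-domain latent vectors.
U_tgt_inactive = map_net.predict(U_aux[~active])
```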
表 1 数据集统计信息
Table 1 Statistics of the Datasets
Dataset | Domain | #Users | #Items | Rating format | #Ratings | Rating density/%
ML10M | Target domain | 5 000 | 5 000 | [0.5, 5] with step 0.5 | 253 673 | 1.01
ML10M | Auxiliary domain | 5 000 | 5 000 | {0, 1} | 2 536 729 | 10.15
Netflix | Target domain | 3 000 | 3 000 | [1, 5] with step 1 | 55 024 | 0.61
Netflix | Auxiliary domain | 3 000 | 3 000 | {0, 1} | 574 880 | 6.39
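The rating density column of Table 1 is simply the share of observed entries in the user-item rating matrix; the snippet below reproduces it from the other columns (values copied from Table 1):

```python
# Rating density = #ratings / (#users * #items); reproduces the last column of Table 1.
datasets = {
    "ML10M target":      (5000, 5000, 253_673),
    "ML10M auxiliary":   (5000, 5000, 2_536_729),
    "Netflix target":    (3000, 3000, 55_024),
    "Netflix auxiliary": (3000, 3000, 574_880),
}
for name, (n_users, n_items, n_ratings) in datasets.items():
    density = 100.0 * n_ratings / (n_users * n_items)
    print(f"{name}: {density:.2f}%")    # 1.01%, 10.15%, 0.61%, 6.39%
```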
表 2 在Netflix数据集上的GC-MC最优参数取值
Table 2 Optimal Parameter Values of GC-MC for Netflix Dataset
Parameter | Target domain (TR90) | Target domain (TR80) | Target domain (TR70) | Target domain (TR60) | Auxiliary domain
ρ | 0.6 | 0.7 | 0.5 | 0.6 | 0.6
d | 45 | 45 | 45 | 45 | 80
k | 100 | 500 | 500 | 300 | 700
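Table 2 reports three GC-MC hyperparameters per setting. Assuming ρ is a dropout rate, k the width of the graph-convolution layer, and d the final latent dimension (an interpretation of the symbols, not a definition given in this excerpt), a single dense GC-MC-style encoder pass could look roughly like this:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_levels = 200, 300, 5        # 5 rating levels: Netflix target domain, [1, 5] step 1
rho, d, k = 0.6, 45, 100                        # Table 2, target domain, TR90 column

R = rng.integers(0, n_levels + 1, size=(n_users, n_items))   # 0 = unobserved, 1..5 = rating level
X = np.eye(n_items)                             # one-hot item features, as in GC-MC

W = [rng.normal(scale=0.1, size=(n_items, k)) for _ in range(n_levels)]
H = np.zeros((n_users, k))
for r in range(1, n_levels + 1):
    A_r = (R == r).astype(float)                # rating-level-specific adjacency
    deg = np.maximum(A_r.sum(axis=1, keepdims=True), 1.0)
    H += (A_r / deg) @ X @ W[r - 1]             # normalized message passing per rating level
H = np.maximum(H, 0.0)                          # ReLU

keep = rng.random(H.shape) > rho                # dropout with rate rho (assumed meaning of rho)
H = H * keep / (1.0 - rho)

W_dense = rng.normal(scale=0.1, size=(k, d))
U = H @ W_dense                                 # user latent vectors of dimension d
print(U.shape)                                  # (200, 45)
```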
表 3 Netflix数据集不同阈值下用户侧和项目侧栈式降噪自编码器最优参数
Table 3 The Optimal Parameters of SDAE on User-Side and Item-Side with Different Thresholds for Netflix Dataset
Dimension | μ1 = 5% | μ1 = 10% | μ1 = 15% | μ1 = 20% | μ1 = 25% | μ1 = 30% | μ2 = 5% | μ2 = 10% | μ2 = 15% | μ2 = 20% | μ2 = 25% | μ2 = 30%
k2 | 30 | 35 | 40 | 45 | 35 | 45 | 30 | 35 | 30 | 45 | 35 | 50
k3 | 15 | 20 | 10 | 25 | 20 | 25 | 20 | 20 | 25 | 20 | 25 | 15
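The dimensions k2 and k3 in Table 3 are the hidden-layer sizes of the user-side and item-side stacked denoising autoencoders. The sketch below shows greedy layer-wise denoising pretraining with those sizes; MLPRegressor trained to reconstruct masked inputs is used here only as a stand-in for the authors' SDAE, and the data and helper names are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
d = 45                                        # latent-vector dimension, cf. d in Table 2
U_aux = rng.normal(size=(1000, d))            # placeholder auxiliary-domain user latent vectors

def pretrain_dae(X, hidden, noise=0.2, seed=0):
    """One denoising layer: reconstruct X from a masked copy, return its hidden code."""
    mask_rng = np.random.default_rng(seed)
    X_noisy = X * (mask_rng.random(X.shape) > noise)        # masking corruption
    dae = MLPRegressor(hidden_layer_sizes=(hidden,), activation="relu",
                       max_iter=2000, random_state=seed)
    dae.fit(X_noisy, X)
    # forward pass through the learned encoder half (ReLU hidden layer)
    return np.maximum(0.0, X @ dae.coefs_[0] + dae.intercepts_[0])

h1 = pretrain_dae(U_aux, hidden=35)           # k2 = 35 (the 10% column of Table 3)
h2 = pretrain_dae(h1, hidden=20)              # k3 = 20
print(h1.shape, h2.shape)                     # (1000, 35) (1000, 20)
```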
表 4 ML10M数据集上不同活跃度阈值和热门度阈值对应的MAE值
Table 4 Values of MAE Corresponding to Different Activity Thresholds and Popularity Thresholds for ML10M Dataset
μ1 | μ2 | MAE (TE10) | MAE (TE20) | MAE (TE30) | MAE (TE40) | Mean MAE
5% | 5% | 0.6953 | 0.7011 | 0.7028 | 0.7017 | 0.7002
5% | 10% | 0.6969 | 0.7188 | 0.7299 | 0.7443 | 0.7225
5% | 15% | 0.6965 | 0.7001 | 0.7036 | 0.7359 | 0.7090
5% | 20% | 0.6981 | 0.6998 | 0.7029 | 0.7333 | 0.7085
5% | 25% | 0.7001 | 0.6995 | 0.7028 | 0.7456 | 0.7120
5% | 30% | 0.6962 | 0.6992 | 0.7013 | 0.7061 | 0.7007
10% | 5% | 0.6952 | 0.7006 | 0.7038 | 0.7041 | 0.7009
10% | 10% | 0.6953 | 0.7189 | 0.7301 | 0.7512 | 0.7239
10% | 15% | 0.6986 | 0.7022 | 0.7038 | 0.7086 | 0.7033
10% | 20% | 0.7011 | 0.6995 | 0.7032 | 0.6998 | 0.7009
10% | 25% | 0.6698 | 0.6735 | 0.6751 | 0.6826 | 0.6753
10% | 30% | 0.6801 | 0.6887 | 0.6866 | 0.6978 | 0.6883
15% | 5% | 0.6986 | 0.7033 | 0.7038 | 0.7046 | 0.7026
15% | 10% | 0.6956 | 0.7189 | 0.7302 | 0.7669 | 0.7279
15% | 15% | 0.6985 | 0.7012 | 0.6992 | 0.7063 | 0.7013
15% | 20% | 0.6978 | 0.697 | 0.6993 | 0.7051 | 0.6998
15% | 25% | 0.7003 | 0.699 | 0.7022 | 0.7101 | 0.7029
15% | 30% | 0.7017 | 0.7008 | 0.7029 | 0.7089 | 0.7036
20% | 5% | 0.6939 | 0.7012 | 0.7045 | 0.7056 | 0.7013
20% | 10% | 0.6971 | 0.7202 | 0.7298 | 0.7268 | 0.7185
20% | 15% | 0.6966 | 0.7006 | 0.7038 | 0.7086 | 0.7024
20% | 20% | 0.6993 | 0.6926 | 0.7013 | 0.7003 | 0.6984
20% | 25% | 0.6971 | 0.6992 | 0.7001 | 0.7068 | 0.7008
20% | 30% | 0.6912 | 0.6968 | 0.6983 | 0.7055 | 0.6980
25% | 5% | 0.6871 | 0.7066 | 0.7026 | 0.7055 | 0.7005
25% | 10% | 0.6898 | 0.7202 | 0.7311 | 0.7289 | 0.7175
25% | 15% | 0.6995 | 0.7021 | 0.7039 | 0.7286 | 0.7085
25% | 20% | 0.7028 | 0.7035 | 0.7151 | 0.7269 | 0.7121
25% | 25% | 0.7008 | 0.7019 | 0.7177 | 0.7282 | 0.7122
25% | 30% | 0.6987 | 0.7026 | 0.7089 | 0.7198 | 0.7075
30% | 5% | 0.6925 | 0.6995 | 0.7249 | 0.7272 | 0.7110
30% | 10% | 0.6991 | 0.7193 | 0.7297 | 0.7289 | 0.7193
30% | 15% | 0.7001 | 0.6986 | 0.7031 | 0.7082 | 0.7025
30% | 20% | 0.7011 | 0.6956 | 0.7115 | 0.7253 | 0.7084
30% | 25% | 0.7026 | 0.7001 | 0.7058 | 0.7088 | 0.7043
30% | 30% | 0.7088 | 0.7001 | 0.7056 | 0.7086 | 0.7058
Note: Bold values in the original table mark the optimal threshold combination on the ML10M dataset, i.e. μ1 = 10%, μ2 = 25% (lowest mean MAE).
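The "Mean MAE" column averages the MAE over the four test splits, and the combination marked as optimal is the one with the lowest mean; a two-row excerpt of that selection rule (presumably how the thresholds were picked) looks like this:

```python
# Threshold selection: average the MAE over TE10-TE40 and keep the (mu1, mu2) pair
# with the lowest mean. Only two candidate rows are listed; the full grid is above.
rows = {
    # (mu1, mu2): MAE on [TE10, TE20, TE30, TE40], copied from Table 4
    (0.10, 0.25): [0.6698, 0.6735, 0.6751, 0.6826],
    (0.20, 0.30): [0.6912, 0.6968, 0.6983, 0.7055],
}
means = {pair: sum(v) / len(v) for pair, v in rows.items()}
best = min(means, key=means.get)
print(best, means[best])    # (0.1, 0.25), mean MAE ~ 0.6753: the combination kept for ML10M
```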
表 5 Netflix数据集上不同活跃度阈值和热门度阈值对应的MAE值
Table 5 Values of MAE Corresponding to Different Activity Thresholds and Popularity Thresholds for Netflix Dataset
μ1 | μ2 | MAE (TE10) | MAE (TE20) | MAE (TE30) | MAE (TE40) | Mean MAE
5% | 5% | 0.8343 | 0.8396 | 0.8621 | 0.8811 | 0.8543
5% | 10% | 0.8328 | 0.8369 | 0.8699 | 0.8655 | 0.8513
5% | 15% | 0.8288 | 0.8403 | 0.8698 | 0.8683 | 0.8518
5% | 20% | 0.8281 | 0.8388 | 0.8679 | 0.8856 | 0.8551
5% | 25% | 0.8372 | 0.8368 | 0.8403 | 0.8919 | 0.8516
5% | 30% | 0.8351 | 0.8326 | 0.8788 | 0.8968 | 0.8608
10% | 5% | 0.8301 | 0.8322 | 0.8359 | 0.8826 | 0.8452
10% | 10% | 0.8281 | 0.8306 | 0.8399 | 0.887 | 0.8464
10% | 15% | 0.8306 | 0.8329 | 0.8782 | 0.8699 | 0.8529
10% | 20% | 0.8349 | 0.8311 | 0.8409 | 0.8618 | 0.8422
10% | 25% | 0.8326 | 0.8345 | 0.8591 | 0.8687 | 0.8487
10% | 30% | 0.8335 | 0.8338 | 0.8369 | 0.8651 | 0.8423
15% | 5% | 0.8276 | 0.8352 | 0.8349 | 0.8401 | 0.8345
15% | 10% | 0.8303 | 0.8346 | 0.8386 | 0.8343 | 0.8345
15% | 15% | 0.8306 | 0.8329 | 0.8399 | 0.8468 | 0.8376
15% | 20% | 0.8293 | 0.8297 | 0.8421 | 0.8295 | 0.8327
15% | 25% | 0.8335 | 0.8329 | 0.8386 | 0.8398 | 0.8362
15% | 30% | 0.8278 | 0.8353 | 0.8388 | 0.8601 | 0.8405
20% | 5% | 0.8085 | 0.8101 | 0.8286 | 0.8305 | 0.8194
20% | 10% | 0.8026 | 0.8035 | 0.8202 | 0.8245 | 0.8127
20% | 15% | 0.8345 | 0.8315 | 0.8398 | 0.8446 | 0.8376
20% | 20% | 0.8356 | 0.8337 | 0.8406 | 0.8419 | 0.8380
20% | 25% | 0.8329 | 0.8353 | 0.8405 | 0.8478 | 0.8391
20% | 30% | 0.8219 | 0.8347 | 0.8302 | 0.8418 | 0.8322
25% | 5% | 0.8303 | 0.8329 | 0.8377 | 0.8326 | 0.8334
25% | 10% | 0.8326 | 0.8356 | 0.8386 | 0.8578 | 0.8412
25% | 15% | 0.8313 | 0.8369 | 0.8403 | 0.8609 | 0.8424
25% | 20% | 0.8301 | 0.8298 | 0.8377 | 0.8499 | 0.8369
25% | 25% | 0.8336 | 0.8368 | 0.8402 | 0.8587 | 0.8423
25% | 30% | 0.8324 | 0.8359 | 0.8371 | 0.8524 | 0.8395
30% | 5% | 0.8349 | 0.8336 | 0.8382 | 0.8809 | 0.8469
30% | 10% | 0.8321 | 0.8359 | 0.8411 | 0.8863 | 0.8489
30% | 15% | 0.8312 | 0.8355 | 0.8401 | 0.8869 | 0.8484
30% | 20% | 0.8311 | 0.8326 | 0.8403 | 0.8717 | 0.8439
30% | 25% | 0.8346 | 0.8298 | 0.8387 | 0.8933 | 0.8491
30% | 30% | 0.8367 | 0.8477 | 0.8609 | 0.8971 | 0.8606
Note: Bold values in the original table mark the optimal threshold combination on the Netflix dataset, i.e. μ1 = 20%, μ2 = 10% (lowest mean MAE).
表 6 ML10M数据集上不同算法的MAE和RMSE值
Table 6 MAE and RMSE Values of Different Algorithms on ML10M Dataset
Algorithm | MAE (TE10) | MAE (TE20) | MAE (TE30) | MAE (TE40) | RMSE (TE10) | RMSE (TE20) | RMSE (TE30) | RMSE (TE40) | p-value
GC-MC | 0.7807 | 0.7819 | 0.7913 | 0.7965 | 0.9971 | 0.9994 | 1.0098 | 1.0146 | 0.0048
CSVD | 0.7101 | 0.7180 | 0.7285 | 0.7294 | 0.9110 | 0.9189 | 0.9318 | 0.9359 | 0.0039
TMF | 0.7207 | 0.7240 | 0.7337 | 0.7387 | 0.9272 | 0.9290 | 0.9414 | 0.9475 | 0.0031
DLSCF-S | 0.7069 | 0.7105 | 0.7181 | 0.7186 | 0.9063 | 0.9084 | 0.9178 | 0.9195 | 0.0049
EKT | 0.7147 | 0.7174 | 0.7238 | 0.7260 | 0.9147 | 0.9182 | 0.9219 | 0.9313 | 0.0040
CRCRCFsv | 0.7266 | 0.7293 | 0.7382 | 0.7402 | 0.9301 | 0.9328 | 0.9459 | 0.9517 | 0.0027
CRCRCFdirect | 0.7133 | 0.7192 | 0.7301 | 0.7299 | 0.9151 | 0.9223 | 0.9339 | 0.9377 | 0.0037
CRCRCF (ours) | 0.6698 | 0.6735 | 0.6751 | 0.6826 | 0.8607 | 0.8756 | 0.8792 | 0.8863
Note: Bold values indicate the best results and underlined values the second-best results on the ML10M dataset.
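Tables 6 and 7 report MAE and RMSE per test split together with a p-value for each baseline. The snippet below gives the two error metrics and shows a paired two-sided t-test of the kind such a column could come from; the exact pairing and test behind the reported p-values are described in the full paper, so the call here is purely illustrative and is not expected to reproduce the table.

```python
import numpy as np
from scipy import stats

def mae(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

# Tiny usage example with made-up ratings and predictions.
y_true = np.array([4.0, 3.5, 5.0])
y_pred = np.array([3.7, 3.9, 4.6])
print(mae(y_true, y_pred), rmse(y_true, y_pred))

# Illustrative paired comparison of two MAE sequences over the four splits
# (values copied from Table 6); not the protocol behind the reported p-values.
crcrcf  = np.array([0.6698, 0.6735, 0.6751, 0.6826])
dlscf_s = np.array([0.7069, 0.7105, 0.7181, 0.7186])
t_stat, p_value = stats.ttest_rel(crcrcf, dlscf_s)
print(t_stat, p_value)
```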
表 7 Netflix数据集上不同算法的MAE和RMSE值
Table 7 MAE and RMSE Values of Different Algorithms on Netflix Dataset
Algorithm | MAE (TE10) | MAE (TE20) | MAE (TE30) | MAE (TE40) | RMSE (TE10) | RMSE (TE20) | RMSE (TE30) | RMSE (TE40) | p-value
GC-MC | 0.9037 | 0.9108 | 0.9113 | 0.9347 | 1.1218 | 1.1362 | 1.1383 | 1.1669 | 0.0069
CSVD | 0.8462 | 0.8522 | 0.8574 | 0.8656 | 1.0651 | 1.0728 | 1.0756 | 1.0885 | 0.0038
TMF | 0.8740 | 0.8776 | 0.8825 | 0.9005 | 1.0982 | 1.1106 | 1.1177 | 1.1402 | 0.0016
DLSCF-S | 0.8413 | 0.8451 | 0.8491 | 0.8617 | 1.0533 | 1.0626 | 1.0657 | 1.0802 | 0.0045
EKT | 0.8438 | 0.8511 | 0.8526 | 0.8634 | 1.0587 | 1.0699 | 1.0697 | 1.0857 | 0.0041
CRCRCFsv | 0.8782 | 0.8815 | 0.8876 | 0.9028 | 1.1036 | 1.1219 | 1.1265 | 1.1431 | 0.0014
CRCRCFdirect | 0.8498 | 0.8571 | 0.8599 | 0.8682 | 1.0694 | 1.0806 | 1.0812 | 1.0921 | 0.0034
CRCRCF (ours) | 0.8026 | 0.8035 | 0.8202 | 0.8245 | 1.0062 | 1.0078 | 1.0239 | 1.0276
Note: Bold values indicate the best results and underlined values the second-best results on the Netflix dataset.
表 8 ML10M数据集评分非密集区域上不同算法的MAE和RMSE值
Table 8 MAE and RMSE Values of Different Algorithms on Non-Rating-Dense Region of ML10M Dataset
Algorithm | MAE (TE10) | MAE (TE20) | MAE (TE30) | MAE (TE40) | RMSE (TE10) | RMSE (TE20) | RMSE (TE30) | RMSE (TE40) | p-value
GC-MC | 0.8198 | 0.8209 | 0.8292 | 0.8415 | 1.0391 | 1.0442 | 1.0497 | 1.0659 | 0.0021
CSVD | 0.7434 | 0.7453 | 0.7572 | 0.7670 | 0.9528 | 0.9543 | 0.9682 | 0.9845 | 0.0023
TMF | 0.7500 | 0.7612 | 0.7771 | 0.7781 | 0.9547 | 0.9774 | 0.9981 | 1.0255 | 0.0015
DLSCF-S | 0.7355 | 0.7401 | 0.7499 | 0.7545 | 0.9379 | 0.9460 | 0.9585 | 0.9742 | 0.0030
EKT | 0.7399 | 0.7417 | 0.7551 | 0.7623 | 0.9493 | 0.95 | 0.9652 | 0.9803 | 0.0026
CRCRCFsv | 0.7588 | 0.7695 | 0.7809 | 0.7851 | 0.9628 | 0.9856 | 1.0049 | 1.0345 | 0.0012
CRCRCFdirect | 0.7466 | 0.7437 | 0.7601 | 0.7699 | 0.9572 | 0.9513 | 0.9721 | 0.9887 | 0.0022
CRCRCF (ours) | 0.6795 | 0.6846 | 0.6852 | 0.6931 | 0.8789 | 0.8801 | 0.9025 | 0.9133
Note: Bold values indicate the best results and underlined values the second-best results on the non-rating-dense region of the ML10M dataset.
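Tables 8 and 9 restrict evaluation to the rating-sparse (non-dense) region. One plausible construction, an assumption about the protocol rather than something stated in this excerpt, keeps exactly the test ratings whose user is inactive or whose item is unpopular under the chosen thresholds:

```python
import numpy as np

def non_dense_mask(test_users, test_items, active_users, popular_items):
    """Boolean mask over test ratings falling in the rating-sparse region."""
    active_users = np.asarray(active_users)
    popular_items = np.asarray(popular_items)
    return ~(active_users[test_users] & popular_items[test_items])

# tiny example: 4 test ratings, users 0/2 active, items 1/3 popular
active_users  = np.array([True, False, True, False])
popular_items = np.array([False, True, False, True])
test_users = np.array([0, 1, 2, 3])
test_items = np.array([1, 1, 0, 3])
print(non_dense_mask(test_users, test_items, active_users, popular_items))
# [False  True  True  True]: only the (active user, popular item) rating is excluded
```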
表 9 Netflix数据集评分非密集区域上不同算法的MAE和RMSE值
Table 9 MAE and RMSE Values of Different Algorithms on Non-Rating-Dense Region of Netflix Dataset
Algorithm | MAE (TE10) | MAE (TE20) | MAE (TE30) | MAE (TE40) | RMSE (TE10) | RMSE (TE20) | RMSE (TE30) | RMSE (TE40) | p-value
GC-MC | 0.9351 | 0.9457 | 0.9768 | 1.0311 | 1.1948 | 1.1976 | 1.3394 | 1.3561 | 0.0033
CSVD | 0.8956 | 0.8977 | 0.9090 | 0.9153 | 1.1137 | 1.1181 | 1.1293 | 1.1364 | 0.0027
TMF | 0.8977 | 0.9100 | 0.9245 | 0.9321 | 1.1290 | 1.1470 | 1.1477 | 1.1508 | 0.0019
DLSCF-S | 0.8884 | 0.8946 | 0.9047 | 0.9117 | 1.1102 | 1.1165 | 1.1254 | 1.1321 | 0.0030
EKT | 0.8894 | 0.8963 | 0.9081 | 0.9131 | 1.1120 | 1.1167 | 1.1284 | 1.1341 | 0.0029
CRCRCFsv | 0.9029 | 0.9201 | 0.9333 | 0.9412 | 1.1368 | 1.1608 | 1.1567 | 1.1601 | 0.0015
CRCRCFdirect | 0.9022 | 0.9089 | 0.9205 | 0.9238 | 1.1221 | 1.1312 | 1.1421 | 1.1355 | 0.0021
CRCRCF (ours) | 0.8221 | 0.8293 | 0.8380 | 0.8429 | 1.0511 | 1.0572 | 1.0629 | 1.0683
Note: Bold values indicate the best results and underlined values the second-best results on the non-rating-dense region of the Netflix dataset.
[1] Anelli V W, Bellogín A, Noia T D, et al. Reenvisioning the comparison between neural collaborative filtering and matrix factorization[C]//Proc of the 15th ACM Conf on Recommender Systems. New York: ACM, 2021: 521−529
[2] 陈碧毅,黄玲,王昌栋,等. 融合显式反馈与隐式反馈的协同过滤推荐算法[J]. 软件学报,2020,31(3):794−805 Chen Biyi, Huang Ling, Wang Changdong, et al. Explicit and implicit feedback based collaborative filtering algorithm[J]. Journal of Software, 2020, 31(3): 794−805 (in Chinese)
[3] Du Min, Christensen R, Zhang Wei, et al. Pcard: Personalized restaurants recommendation from card payment transaction records[C]//Proc of the 28th World Wide Web Conf. New York: ACM, 2019: 2687−2693
[4] Gao Yuanning, Gao Xiaofeng, Li Xianyue, et al. An embedded GRASP-VNS based two-layer framework for tour recommendation[J]. IEEE Transactions on Services Computing, 2022, 15(2): 847−859 doi: 10.1109/TSC.2019.2963026
[5] 张玉洁,董政,孟祥武. 个性化广告推荐系统及其应用研究[J]. 计算机学报,2021,44(3):531−563 doi: 10.11897/SP.J.1016.2021.00531 Zhang Yujie, Dong Zheng, Meng Xiangwu. Research on personalized advertising recommendation systems and their applications[J]. Chinese Journal of Computers, 2021, 44(3): 531−563 (in Chinese) doi: 10.11897/SP.J.1016.2021.00531
[6] Pan Weike, Liu N N, Xiang E W, et al. Transfer learning to predict missing ratings via heterogeneous user feedbacks[C]//Proc of the 22nd Int Joint Conf on Artificial Intelligence. San Francisco, CA: Morgan Kaufmann, 2011: 2318−2323
[7] Pan Weike, Ming Zhong. Interaction-rich transfer learning for collaborative filtering with heterogeneous user feedback[J]. IEEE Intelligent Systems, 2014, 29(6): 48−54 doi: 10.1109/MIS.2014.2
[8] Pan Weike, Xia Shanchuan, Liu Zhuode, et al. Mixed factorization for collaborative recommendation with heterogeneous explicit feedbacks[J]. Information Sciences, 2016, 332(C): 84−93
[9] Zhang Hongwei, Kong Xiangwei, Zhang Yujia. Enhanced knowledge transfer for collaborative filtering with multi-source heterogeneous feedbacks[J]. Multimedia Tools and Applications, 2021, 80(16): 24245−24270 doi: 10.1007/s11042-021-10834-y
[10] Jiang Shuhui, Ding Zhengming, Fu Yun. Heterogeneous recommendation via deep low-rank sparse collective factorization[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019, 42(5): 1097−1111
[11] Berg R, Kipf T N, Welling M. Graph convolutional matrix completion[C]//Proc of the 24th ACM SIGKDD Int Conf on Knowledge Discovery and Data Mining. New York: ACM, 2018: 974−983
[12] Raina R, Battle A J, Lee H, et al. Self-taught learning: Transfer learning from unlabeled data[C]//Proc of the 24th Int Conf on Machine Learning. New York: ACM, 2007: 759−766
[13] Li Bin, Yang Qiang, Xue Xiangyang. Can movies and books collaborate? Cross-domain collaborative filtering for sparsity reduction[C]//Proc of the 21st Int Joint Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2009: 2052−2057
[14] Li Bin, Yang Qiang, Xue Xiangyang. Transfer learning for collaborative filtering via a rating-matrix generative model[C]//Proc of the 26th Int Conf on Machine Learning. New York: ACM, 2009: 617−624
[15] Zhang Qian, Hao Peng, Lu Jie, et al. Cross-domain recommendation with semantic correlation in tagging systems[C/OL]//Proc of the 26th Int Joint Conf on Neural Networks. Piscataway, NJ: IEEE, 2019 [2023-10-18]. https://doi.org/10.1109/IJCNN.2019.8852049
[16] Li Yakun, Ren Jiadong, Liu Jiaomin, et al. Deep sparse autoencoder prediction model based on adversarial learning for cross-domain recommendations[J]. Knowledge-Based Systems, 2021, 220: 106948 doi: 10.1016/j.knosys.2021.106948
[17] Jiang Meng, Cui Peng, Yuan J N, et al. Little is much: Bridging cross-platform behaviors through overlapped crowds[C]//Proc of the 30th AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2016: 13−19
[18] Zhang Qian, Lu Jie, Wu Dianshuang, et al. A cross-domain recommender system with kernel-induced knowledge transfer for overlapping entities[J]. IEEE Transactions on Neural Networks and Learning Systems, 2018, 30(7): 1998−2012
[19] Zhu Feng, Wang Yan, Chen Chaochao, et al. A graphical and attentional framework for dual-target cross-domain recommendation[C]//Proc of the 29th Int Joint Conf on Artificial Intelligence. San Francisco, CA: Morgan Kaufmann, 2020: 3001−3008
[20] Li Pan, Tuzhilin A. Dual metric learning for effective and efficient cross-domain recommendations[J]. IEEE Transactions on Knowledge and Data Engineering, 2021, 35(1): 321−334
[21] Berkovsky S, Kuflik T, Ricci F. Cross-domain mediation in collaborative filtering[C]//Proc of the 11th Int Conf on User Modeling Conf. Berlin: Springer, 2007: 355−359
[22] Resnick P, Iacovou N, Suchak M, et al. Grouplens: An open architecture for collaborative filtering of netnews[C]//Proc of the ACM Conf on Computer Supported Cooperative Work. New York: ACM, 1994: 175−186
[23] Singh A P, Gordon G J. Relational learning via collective matrix factorization[C]//Proc of the 14th ACM SIGKDD Int Conf on Knowledge Discovery and Data Mining. New York: ACM, 2008: 650−658
[24] Hu Liang, Cao Jian, Xu Guandong, et al. Personalized recommendation via cross-domain triadic factorization[C]//Proc of the 22nd Int Conf on World Wide Web. New York: ACM, 2013: 595−606
[25] Loni B, Shi Yue, Larson M A, et al. Cross-domain collaborative filtering with factorization machines[C]//Proc of the 36th European Conf on Information Retrieval. Berlin: Springer, 2014: 656−661
[26] Yuan Feng, Yao Lina, Benatallah B. DARec: Deep domain adaptation for cross-domain recommendation via transferring rating patterns[C]//Proc of the 28th Int Joint Conf on Artificial Intelligence. San Francisco, CA: Morgan Kaufmann, 2019: 4227−4233
[27] Yu Xu, Chu Yan, Jiang Feng, et al. SVMs classification based two-side cross domain collaborative filtering by inferring intrinsic user and item features[J]. Knowledge-Based Systems, 2018, 141: 80−91 doi: 10.1016/j.knosys.2017.11.010
[28] Yu Xu, Jiang Feng, Du Junwei, et al. A cross-domain collaborative filtering algorithm with expanding user and item features via the latent factor space of auxiliary domains[J]. Pattern Recognition, 2019, 94: 96−109 doi: 10.1016/j.patcog.2019.05.030
[29] Pan Weike, Xiang E W, Liu N N, et al. Transfer learning in collaborative filtering for sparsity reduction[C]//Proc of the 24th AAAI Conf on Artificial Intelligence. Palo Alto, CA: AAAI, 2010: 230−235
[30] Yu Xu, Zhan Dingjia, Liu Lei, et al. A privacy-preserving cross-domain healthcare wearables recommendation algorithm based on domain-dependent and domain-independent feature fusion[J]. IEEE Journal of Biomedical and Health Informatics, 2022, 26(5): 1928−1936 doi: 10.1109/JBHI.2021.3069629
[31] Kingma D P, Ba J. Adam: A method for stochastic optimization[J]. arXiv preprint, arXiv: 1412.6980, 2015
[32] Vincent P, Larochelle H, Lajoie I, et al. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion[J]. Journal of Machine Learning Research, 2010, 11(12): 3371−3408
[33] Rumelhart D E, Hinton G E, Williams R J. Learning representations by back-propagating errors[J]. Nature, 1986, 323(6088): 533−536 doi: 10.1038/323533a0