-
Abstract: Query suggestion (QS) is an indispensable component of today's search engines: it provides query candidates before users finish entering a complete query, helping them express their information needs more accurately and more quickly. Deep learning helps improve the accuracy of query suggestion and has become the mainstream technology driving its development in recent years. This survey summarizes, analyzes, and compares the state of research on deep-learning-based query suggestion (DQS). According to the stage at which deep learning is applied, DQS methods are divided into two categories, generative QS methods and ranking-based QS methods, and the modeling ideas and characteristic features of each model are analyzed. In addition, the datasets, baselines, and evaluation metrics commonly used in the QS field are introduced, and the technical characteristics and experimental results of the different models are compared. Finally, the current challenges and future development trends of deep-learning-based QS research are summarized.
-
Table 1 Common Baseline Methods
Table 2 Experimental Comparison of Neural Language Model Based DQS Methods

| Method | Key technique | Dataset | Dataset processing | Metric | Value |
| NQLM [11] | Word-level language model | AOL | ① Remove queries occurring fewer than 3 times and overly long queries; ② split queries following Ref. [45] | MRR | 0.355 |
| NQAC [12] | User and time factors modeled with a GRU | AOL, biomedical dataset | ① Remove queries occurring fewer than 3 times and overly long queries; ② split queries following Ref. [45] | MRR | 0.382 |
| Ref. [13] | Character-level language model | AOL, Amazon | ① Remove overly long queries; ② split each complete test query into a prefix and a suffix; ③ split queries by timestamp to simulate the real-world setting | Hit Rate | 0.448 |
| FactorCell [14] | Adaptive weight matrix | AOL | Split each complete test query into a prefix and a suffix (prefix of at least 2 characters, suffix of at least 1 character) | MRR | 0.309 |

Note: For studies conducted on multiple datasets, only the results on the AOL dataset are reported here. A sketch of the prefix/suffix evaluation protocol and the MRR metric is given below the table.
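The methods in Table 2 share a common evaluation protocol: each full test query is split into a typed prefix and the remaining suffix, the model produces a ranked list of completions for the prefix, and MRR rewards the reciprocal of the rank at which the true query appears (Hit Rate simply checks whether it appears at all). The following is a minimal sketch of that protocol, not code from the cited works; the 2-character prefix rule follows FactorCell [14], and suggest is a placeholder for any DQS model's completion interface.

```python
# Minimal sketch of the QAC evaluation protocol behind Table 2: split each test
# query into prefix/suffix, ask the model for ranked completions of the prefix,
# and score with MRR. All names are illustrative assumptions.
from typing import Callable, List, Tuple


def split_prefix_suffix(query: str, prefix_len: int = 2) -> Tuple[str, str]:
    """Split a full test query into a typed prefix and the remaining suffix."""
    return query[:prefix_len], query[prefix_len:]


def mean_reciprocal_rank(test_queries: List[str],
                         suggest: Callable[[str], List[str]],
                         prefix_len: int = 2) -> float:
    """MRR: average of 1/rank of the true query in the ranked suggestions (0 if absent)."""
    total = 0.0
    for query in test_queries:
        prefix, _ = split_prefix_suffix(query, prefix_len)
        ranked = suggest(prefix)                      # model-produced completions
        if query in ranked:
            total += 1.0 / (ranked.index(query) + 1)  # rank is 1-based
    return total / len(test_queries)


if __name__ == "__main__":
    # Toy "model" that ignores the prefix and always returns the same list.
    toy_suggest = lambda prefix: ["google maps", "gmail login", "google translate"]
    print(mean_reciprocal_rank(["gmail login", "github"], toy_suggest))  # (1/2 + 0)/2 = 0.25
```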
Table 3 Experimental Comparison of Encoder-Decoder Model Based DQS Methods

| Method | Category | Key technique | Dataset | Dataset processing | Metric | Value |
| HRED [20] | Session-based methods | Hierarchical modeling of session factors | AOL | Split the dataset into sessions, using 30 min of idle time as the session boundary | MRR | 0.575 |
| M-NSRF [21] | | Multi-task learning; bidirectional LSTM with max pooling | AOL | Split the dataset into sessions as in Ref. [19] | MRR | 0.238 |
| HCARNN [23] | | Attention mechanism | Baidu Maps | Treat one user's queries within one day as a session | MRR | 0.138 |
| Ref. [24] | | Personalized encoding strategy | LinkedIn | Treat one user's queries within a short time window as a session and split the dataset accordingly | CTR | +5.62% |
| ACG [26] | Query-reformulation-based methods | Attention mechanism; ACG mechanism | AOL | Split the dataset into sessions as in Ref. [19] | MRR | 0.594 |
| RIN [28] | | Attention mechanism; multi-task learning | AOL | ① Split the dataset into sessions as in Ref. [19]; ② divide the test set into long, medium, and short sessions by the number of queries | MRR | 0.825 |
| AHNQS [29] | Long-term-query-based methods | Hierarchical model; attention mechanism | AOL | ① Split the dataset into sessions as in Ref. [19]; ② remove queries occurring fewer than 20 times, and keep only sessions longer than 5 and users with at least 5 sessions | MRR | 0.851 |
| FMN [30] | Click-behavior-based methods | Feedback memory network | Sogou | ① Split the dataset into sessions as in Ref. [19]; ② treat the last query of each session as the correct query suggestion; ③ treat clicked titles as document content | MRR | 0.581 |
| HAN [31] | | Hierarchical model + attention mechanism | | ① Split the dataset into sessions as in Ref. [19]; ② treat clicked titles as document content | MRR | 0.604 |
| CARS [32] | | Attention mechanism; multi-task learning | AOL | Use BM25 [51] to supplement the document lists retrieved by users in the dataset | MRR | 0.542 |
| M2A [33] | | Transformer; multi-task learning | Taobao | No special processing | MRR | 0.579 |
| QQS [34] | | Maxout pointer mechanism | SQuAD | Extract, merge, and locate keywords | Human evaluation | 0.33/0.4/0.67 (Google/Bing/QQS) |
| SERP [37] | | Dynamic encoder | arXiv | ① Remove symbols and digits from documents; ② remove overly common, overly rare, and too-short queries | Human evaluation (precision@5) | 0.9 |
| DFSM [38] | | Dynamic Flow | Boss Zhipin | Split the dataset into sessions as in Ref. [19] | MRR, NDCG, etc. | |

Note: For studies conducted on multiple datasets, only the results on the AOL dataset are reported here. A sketch of the 30 min session segmentation used by most rows is given below the table.
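Most rows of Table 3 preprocess the query log by cutting it into sessions at 30 min idle-time boundaries, the convention introduced with HRED [20] and reused in the setup of Ref. [19]. The following is a minimal sketch of that segmentation, assuming each user's log is available as (timestamp, query) pairs; the function and variable names are illustrative and not taken from any cited implementation.

```python
# Minimal sketch of the 30-minute idle-time session segmentation used by HRED [20]
# and by the setup of Ref. [19] in Table 3. The (timestamp, query) input format is
# an assumption for illustration.
from datetime import datetime, timedelta
from typing import List, Tuple


def split_into_sessions(log: List[Tuple[datetime, str]],
                        idle_gap: timedelta = timedelta(minutes=30)) -> List[List[str]]:
    """Open a new session whenever the gap since the previous query exceeds idle_gap."""
    sessions: List[List[str]] = []
    prev_time = None
    for time, query in sorted(log):          # process one user's queries chronologically
        if prev_time is None or time - prev_time > idle_gap:
            sessions.append([])              # session boundary: start a new session
        sessions[-1].append(query)
        prev_time = time
    return sessions


if __name__ == "__main__":
    user_log = [
        (datetime(2006, 3, 1, 10, 0), "query suggestion"),
        (datetime(2006, 3, 1, 10, 10), "query suggestion deep learning"),
        (datetime(2006, 3, 1, 11, 30), "aol query log"),
    ]
    # -> [['query suggestion', 'query suggestion deep learning'], ['aol query log']]
    print(split_into_sessions(user_log))
```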
[1] Bar-Yossef Z, Kraus N. Context-sensitive query auto-completion[C]//Proc of the 20th Int Conf on World Wide Web. New York: ACM, 2011: 107−116
[2] Tian Xuan, Zhang Xiao, Meng Xiangguang, et al. Research review of time-sensitive query auto-completion technique[J]. Acta Electronica Sinica, 2015, 43(6): 1160−1168 (in Chinese) doi: 10.3969/j.issn.0372-2112.2015.06.018
[3] Tahery S, Farzi S. Customized query auto-completion and suggestion – A review[J]. Information Systems, 2020, 87(1): 101415–101432
[4] Lecun Y, Bengio Y, Hinton G. Deep learning[J]. Nature, 2015, 521(7553): 436−444 doi: 10.1038/nature14539
[5] Shokouhi M. Learning to personalize query auto-completion[C]//Proc of the 36th Int ACM SIGIR Conf on Research and Development in Information Retrieval. New York: ACM, 2013: 103−112
[6] Hu Sheng, Xiao Chuan, Ishikawa Y. An efficient algorithm for location-aware query autocompletion[J]. IEICE Transactions on Information and Systems, 2018, 101(1): 181−192
[7] Huang Zhipeng, Mamoulis N. Location-aware query recommendation for search engines at scale[C]//Proc of the 15th Int Symp on Advances in Spatial and Temporal Databases. Berlin: Springer, 2017: 203−220
[8] Qi Shuyao, Wu Dingming, Mamoulis N. Location aware keyword query suggestion based on document proximity[J]. IEEE Transactions on Knowledge and Data Engineering, 2015, 28(1): 82−97
[9] Kannadasan M R, Aslanyan G. Personalized query auto-completion through a lightweight representation of the user context[J]. arXiv preprint, arXiv: 1905.01386, 2019
[10] Jiang J Y, Ke Y Y, Chien P Y, et al. Learning user reformulation behavior for query auto-completion[C]//Proc of the 37th Int ACM SIGIR Conf on Research & Development in Information Retrieval. New York: ACM, 2014: 445−454
[11] Park D H, Chiba R. A neural language model for query auto-completion[C]//Proc of the 40th Int ACM SIGIR Conf on Research and Development in Information Retrieval. New York: ACM, 2017: 1189−1192
[12] Fiorini N, Lu Zhiyong. Personalized neural language models for real-world query auto completion[J]. arXiv preprint, arXiv: 1804.06439, 2018
[13] Wang Powei, Zhang Huan, Mohan V, et al. Realtime query completion via deep language models[C]//Proc of the 41st Int ACM SIGIR Conf on Research and Development in Information Retrieval. New York: ACM, 2018: 2319−2327
[14] Jaech A, Ostendorf M. Personalized language model for query auto-completion[J]. arXiv preprint, arXiv: 1804.09661, 2018
[15] Jaech A, Ostendorf M. Low-rank RNN adaptation for context-aware language modeling[J]. Transactions of the Association for Computational Linguistics, 2018, 6: 497−510
[16] Shokouhi M, Radinsky K. Time-sensitive query auto-completion[C]//Proc of the 35th Int ACM SIGIR Conf on Research and Development in Information Retrieval. New York: ACM, 2012: 601−610
[17] Vijayakumar A K, Cogswell M, Selvaraju R R, et al. Diverse beam search: Decoding diverse solutions from neural sequence models[J]. arXiv preprint, arXiv: 1610.02424, 2016
[18] Gabín J, Ares M E, Parapar J. Keyword embeddings for query suggestion[C]//Proc of the 45th European Conf on Information Retrieval. Berlin: Springer, 2023: 346−360
[19] Mustar A, Lamprier S, Piwowarski B. On the study of transformers for query suggestion[J]. ACM Transactions on Information Systems, 2022, 40(1): Article No.18
[20] Sordoni A, Bengio Y, Vahabi H, et al. A hierarchical recurrent encoder-decoder for generative context-aware query suggestion[C]//Proc of the 24th ACM Int Conf on Information and Knowledge Management. New York: ACM, 2015: 553−562
[21] Ahmad W U, Chang K W. Multi-task learning for document ranking and query suggestion[C/OL]//Proc of the 6th Int Conf on Learning Representations. 2018[2022-12-11]. https://openreview.net/pdf?id=SJ1nzBeA-
[22] Conneau A, Kiela D, Schwenk H, et al. Supervised learning of universal sentence representations from natural language inference data[C]//Proc of the 2017 Conf on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2017: 670−680
[23] Song Jun, Xiao Jun, Wu Fei, et al. Hierarchical contextual attention recurrent neural network for map query suggestion[J]. IEEE Transactions on Knowledge and Data Engineering, 2017, 29(9): 1888−1901
[24] Zhong Jianling, Guo Weiwei, Gao Huiji, et al. Personalized query suggestions[C]//Proc of the 43rd Int ACM SIGIR Conf on Research and Development in Information Retrieval. New York: ACM, 2020: 1645−1648
[25] Luong M T, Pham H, Manning C D. Effective approaches to attention-based neural machine translation[C]//Proc of the 2015 Conf on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2015: 1412−1421
[26] Dehghani M, Rothe S, Alfonseca E, et al. Learning to attend, copy, and generate for session-based query suggestion[C]//Proc of the 26th ACM Int Conf on Information and Knowledge Management. New York: ACM, 2017: 1747−1756
[27] Bahdanau D, Cho K, Bengio Y. Neural machine translation by jointly learning to align and translate[C/OL]//Proc of the 3rd Int Conf on Learning Representations. 2015[2021-04-22]. https://arxiv.org/abs/1409.0473
[28] Jiang Jun Yu, Wang Wei. RIN: Reformulation inference network for context-aware query suggestion[C]//Proc of the 27th ACM Int Conf on Information and Knowledge Management. New York: ACM, 2018: 197−206
[29] Chen Wanyu, Cai Fei, Chen Honghui, et al. Attention-based hierarchical neural query suggestion[C]//Proc of the 41st Int ACM SIGIR Conf on Research & Development in Information Retrieval. New York: ACM, 2018: 1093−1096
[30] Wu Bin, Xiong Chenyan, Sun Maosong, et al. Query suggestion with feedback memory network[C]//Proc of the 27th Int Conf on World Wide Web. New York: ACM, 2018: 1563−1571
[31] Li Xiangsheng, Liu Yiqun, Li Xin, et al. Hierarchical attention network for context-aware query suggestion[C]//Proc of the 14th Asia Information Retrieval Societies Conf. Berlin: Springer, 2018: 173−186
[32] Ahmad W U, Chang Kaiwei, Wang Hongning. Context attentive document ranking and query suggestion[C]//Proc of the 42nd Int ACM SIGIR Conf on Research and Development in Information Retrieval. New York: ACM, 2019: 385−394
[33] Yin Din, Tan Jiwei, Zhang Zhe, et al. Learning to generate personalized query auto-completions via a multi-view multi-task attentive approach[C]//Proc of the 26th ACM SIGKDD Int Conf on Knowledge Discovery & Data Mining. New York: ACM, 2020: 2998−3007
[34] He Yuxin, Mao Xianling, Wei Wei, et al. Question-formed query suggestion[C]//Proc of the 12th IEEE Int Conf on Big Knowledge. Piscataway, NJ: IEEE, 2021: 482−489
[35] Rajpurkar P, Zhang Jian, Lopyrev K, et al. SQuAD: 100,000+ questions for machine comprehension of text[C]//Proc of the 2016 Conf on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2016: 2383−2392
[36] Zhao Yao, Ni Xiaochuan, Ding Yuanyuan, et al. Paragraph-level neural question generation with maxout pointer and gated self-attention networks[C]//Proc of the 2018 Conf on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2018: 3901−3910
[37] Medlar A, Li Jing, Głowacka D. Query suggestions as summarization in exploratory search[C]//Proc of the 6th Conf on Human Information Interaction and Retrieval. New York: ACM, 2021: 119−128
[38] Zhou Zile, Zhou Xiao, Li Mingzhe, et al. Personalized query suggestion with searching dynamic flow for online recruitment[C]//Proc of the 31st ACM Int Conf on Information & Knowledge Management. New York: ACM, 2022: 2773−2783
[39] Mustar A, Lamprier S, Piwowarski B. Using BERT and BART for query suggestion[C/OL]//Proc of the 1st Joint Conf of the Information Retrieval Communities in Europe. 2020[2023-02-01]. https://www.irit.fr/CIRCLE/wp-content/uploads/2020/06/CIRCLE20_06.pdf
[40] Devlin J, Chang M W, Lee K, et al. BERT: Pre-training of deep bidirectional transformers for language understanding[C]//Proc of the 2019 Conf of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg, PA: ACL, 2019: 4171−4186
[41] Lewis M, Liu Yinhan, Goyal N, et al. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension[C]//Proc of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA: ACL, 2020: 7871−7880
[42] Zhu Yukun, Kiros R, Zemel R, et al. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books[C]//Proc of the 2015 IEEE Int Conf on Computer Vision. Piscataway, NJ: IEEE, 2015: 19−27
[43] Raffel C, Shazeer N, Roberts A, et al. Exploring the limits of transfer learning with a unified text-to-text transformer[J]. The Journal of Machine Learning Research, 2020, 21(1): 5485−5551
[44] Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need[J]. Advances in Neural Information Processing Systems, 2017, 30(1): 5998−6008
[45] Burges C J C. From RankNet to LambdaRank to LambdaMART: An overview[R/OL]. 2010[2022-02-02]. https://www.microsoft.com/en-us/research/uploads/prod/2016/02/MSR-TR-2010-82.pdf
[46] Mitra B, Craswell N. Query auto-completion for rare prefixes[C]//Proc of the 24th ACM Int Conf on Information and Knowledge Management. New York: ACM, 2015: 1755−1758
[47] Shen Yelong, He Xiaodong, Gao Jianfeng, et al. Learning semantic representations using convolutional neural networks for web search[C]//Proc of the 23rd Int Conf on World Wide Web. New York: ACM, 2014: 373−374
[48] Wang Sida, Guo Weiwei, Gao Huiji, et al. Efficient neural query auto completion[C]//Proc of the 29th ACM Int Conf on Information & Knowledge Management. New York: ACM, 2020: 2797−2804
[49] Sethy A, Chen S, Arisoy E, et al. Unnormalized exponential and neural network language models[C]//Proc of the 40th IEEE Int Conf on Acoustics, Speech and Signal Processing (ICASSP). Piscataway, NJ: IEEE, 2015: 5416−5420
[50] Pass G, Chowdhury A, Torgeson C. A picture of search[C]//Proc of the 1st Int Conf on Scalable Information Systems. New York: ACM, 2006: 1−7
[51] Robertson S, Zaragoza H. The probabilistic relevance framework: BM25 and beyond[J]. Foundations & Trends in Information Retrieval, 2009, 3(4): 333−389
[52] Wang Liang, Yang Nan, Wei Furu. Query2doc: Query expansion with large language models[J]. arXiv preprint, arXiv: 2303.07678, 2023
[53] Ye Fanghua, Fang Meng, Li Shenghui, et al. Enhancing conversational search: Large language model-aided informative query rewriting[C]//Proc of the 2023 Conf on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2023: 5985−6006
[54] Hao Jie, Liu Yang, Fan Xing, et al. CGF: Constrained generation framework for query rewriting in conversational AI[C]//Proc of the 2022 Conf on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2022: 475−483
[55] Wu Zeqiu, Luan Yi, Rashkin H, et al. CONQRR: Conversational query rewriting for retrieval with reinforcement learning[C]//Proc of the 2022 Conf on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2022: 10000−10014
[56] Qian Hongjin, Dou Zhicheng. Explicit query rewriting for conversational dense retrieval[C]//Proc of the 2022 Conf on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2022: 4725−4737
[57] Yang Dayu, Zhang Yue, Fang Hui. Zero-shot query reformulation for conversational search[C]//Proc of the 2023 ACM SIGIR Int Conf on Theory of Information Retrieval. New York: ACM, 2023: 257−263
[58] Mo Fengran, Mao Kelong, Zhu Yutao, et al. ConvGQR: Generative query reformulation for conversational search[C]//Proc of the 61st Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA: ACL, 2023: 4998–5012
[59] Ma Xinbei, Gong Yeyun, He Pengcheng, et al. Query rewriting in retrieval-augmented large language models[C]//Proc of the 2023 Conf on Empirical Methods in Natural Language Processing. Stroudsburg, PA: ACL, 2023: 5303−5315
[60] Chen Qiwei, Pei Changhua, Lv Shanshan, et al. End-to-end user behavior retrieval in click-through rate prediction model[J]. arXiv preprint, arXiv: 2108.04468, 2021
[61] Palumbo E, Damianou A, Wang A, et al. Graph learning for exploratory query suggestions in an instant search system[C]//Proc of the 32nd ACM Int Conf on Information and Knowledge Management. New York: ACM, 2023: 4780−4786