Yang Jieyi, Dong Yihong, Qian Jiangbo. Research Progress of Few-Shot Learning Methods Based on Graph Neural Networks[J]. Journal of Computer Research and Development, 2024, 61(4): 856-876. DOI: 10.7544/issn1000-1239.202220933

Research Progress of Few-Shot Learning Methods Based on Graph Neural Networks

Funds: This work was supported by the National Natural Science Foundation of China (62271274), the Natural Science Foundation of Ningbo (2023J114), and the Public Welfare Technology Research Project of Ningbo (2023S023).
More Information
  • Author Bio:

    Yang Jieyi: born in 1999. Master candidate. Student member of CCF. Her main research interests include few-shot learning, graph neural networks, and machine learning

    Dong Yihong: born in 1969. PhD, professor, master supervisor. Member of CCF. His main research interests include big data, data mining, and artificial intelligence

    Qian Jiangbo: born in 1974. PhD, professor, PhD supervisor. Senior member of CCF. His main research interests include machine learning, pattern recognition, and intelligent systems

  • Received Date: November 10, 2022
  • Revised Date: May 15, 2023
  • Available Online: November 13, 2023
  • Abstract: Few-shot learning (FSL) aims to build models that can solve a problem from only a small number of samples. Although deep learning trained on big data has succeeded in many fields, realistic scenarios often lack sufficient samples or labeled samples, which makes FSL a promising research direction. Graph neural networks (GNN) have attracted great attention for their excellent performance in many applications, and many methods therefore apply GNN to FSL. However, there are currently few surveys of GNN-based FSL methods, and a classification scheme and introductory treatment of this line of work are lacking. We systematically review the current work on GNN-based FSL. The survey outlines the foundations and concepts of graph methods for FSL and, according to the basic ideas of the models, divides them into four categories: node-feature-based, edge-feature-based, node-pair-feature-based, and class-level-feature-based methods, and introduces the research progress of each category. We then summarize the commonly used few-shot datasets, the experimental results of representative models on these datasets, and the advantages and disadvantages of each category of methods. Finally, we describe the current status and challenges of graph methods for FSL and discuss their future directions.
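To make the node-feature category in the taxonomy above concrete, the following is a minimal PyTorch sketch of one common episodic formulation, not an implementation from any of the surveyed papers; the class name EpisodeGNN and all dimensions are illustrative assumptions. The support and query samples of an N-way K-shot episode become nodes of a dense graph, soft edge weights are derived from pairwise embedding distances, and a few rounds of message passing propagate label information from labeled support nodes to unlabeled query nodes, whose class logits are then read out.

```python
# Minimal sketch (illustrative only) of the node-feature idea behind
# GNN-based few-shot learning: build a dense graph over the samples of one
# episode, weight edges by embedding similarity, and let message passing
# carry label information from support nodes to query nodes.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EpisodeGNN(nn.Module):
    def __init__(self, feat_dim, n_way, hidden_dim=64, n_layers=2):
        super().__init__()
        self.n_way = n_way
        # Each node carries its embedding plus a label slot: one-hot for
        # support nodes, uniform for query nodes whose class is unknown.
        dims = [feat_dim + n_way] + [hidden_dim] * n_layers
        self.layers = nn.ModuleList(
            [nn.Linear(d_in, d_out) for d_in, d_out in zip(dims[:-1], dims[1:])]
        )
        self.classifier = nn.Linear(dims[-1], n_way)

    @staticmethod
    def soft_adjacency(x):
        # Row-normalized similarity matrix: nearer nodes get larger edge weights.
        return torch.softmax(-torch.cdist(x, x), dim=-1)

    def forward(self, support_x, support_y, query_x):
        # support_x: (N*K, feat_dim); support_y: (N*K,); query_x: (Q, feat_dim)
        n_query = query_x.size(0)
        support_lab = F.one_hot(support_y, self.n_way).float()
        query_lab = torch.full((n_query, self.n_way), 1.0 / self.n_way)
        x = torch.cat(
            [torch.cat([support_x, support_lab], dim=1),
             torch.cat([query_x, query_lab], dim=1)], dim=0)
        for layer in self.layers:
            adj = self.soft_adjacency(x)      # rebuild the graph from current features
            x = F.relu(layer(adj @ x))        # aggregate neighbors, then transform
        return self.classifier(x[-n_query:])  # logits for the query nodes only


if __name__ == "__main__":
    n_way, k_shot, n_query, feat_dim = 5, 1, 3, 32
    support_x = torch.randn(n_way * k_shot, feat_dim)          # e.g. CNN embeddings
    support_y = torch.arange(n_way).repeat_interleave(k_shot)  # labels 0..N-1
    query_x = torch.randn(n_query, feat_dim)
    logits = EpisodeGNN(feat_dim, n_way)(support_x, support_y, query_x)
    print(logits.shape)  # torch.Size([3, 5])
```

Roughly speaking, the other categories in the taxonomy differ in what the graph updates: edge-feature and node-pair methods maintain explicit edge or pairwise-relation representations, while class-level methods introduce class-level (prototype-like) nodes rather than updating only per-sample node features as in this sketch.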
