Li Han, Yan Mingyu, Lü Zhengyang, Li Wenming, Ye Xiaochun, Fan Dongrui, Tang Zhimin. Survey on Graph Neural Network Acceleration Architectures[J]. Journal of Computer Research and Development, 2021, 58(6): 1204-1229. DOI: 10.7544/issn1000-1239.2021.20210166

Survey on Graph Neural Network Acceleration Architectures

Funds: This work was supported by the National Natural Science Foundation of China (61732018, 61872335, 61802367), the International Partnership Program of Chinese Academy of Sciences (171111KYSB20200002), and the Open Project Program of the State Key Laboratory of Mathematical Engineering and Advanced Computing (2019A07).
  • Published Date: May 31, 2021
  • Abstract: Recently, emerging graph neural networks (GNNs) have received extensive attention from academia and industry due to their powerful graph learning and reasoning capabilities, and are considered the core force driving the field of artificial intelligence into the "cognitive intelligence" stage. Because GNNs integrate the execution processes of both traditional graph processing and neural networks, a hybrid execution pattern naturally arises in which irregular and regular computation and memory-access behaviors coexist. This execution pattern prevents traditional processors, as well as existing graph-processing and neural-network acceleration architectures, from coping with the two opposing execution behaviors at the same time, so none of them can meet the acceleration requirements of GNNs. To solve these problems, acceleration architectures tailored for GNNs continue to emerge. They customize computing units and on-chip storage hierarchies for GNNs and optimize computation and memory-access behaviors, achieving notable acceleration. Starting from the challenges faced in designing GNN acceleration architectures, this paper systematically analyzes and introduces the overall architecture designs and key optimization techniques in this field from the perspectives of computation, on-chip memory access, and off-chip memory access. Finally, future directions for GNN acceleration architecture design are discussed from several angles, which we hope will inspire researchers in this field.
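  The hybrid execution pattern described in the abstract can be made concrete with a short sketch. The following minimal NumPy example (ours, not from the paper; the function names, CSR layout, and toy graph are all hypothetical) contrasts the two phases of one GNN layer: an irregular, memory-bound aggregation phase that behaves like traditional graph processing, and a regular, compute-bound combination phase that behaves like a conventional neural-network layer.

      # Minimal sketch of the GNN hybrid execution pattern (illustrative only).
      import numpy as np

      def aggregate(indptr, indices, features):
          """Aggregation phase: irregular gathers over each vertex's neighbors
          (sum aggregator), driven by the graph structure in CSR form."""
          out = np.zeros_like(features)
          for v in range(len(indptr) - 1):
              neighbors = indices[indptr[v]:indptr[v + 1]]
              out[v] = features[neighbors].sum(axis=0)  # random feature-row accesses
          return out

      def combine(aggregated, weight):
          """Combination phase: regular dense matrix multiply plus ReLU,
          identical in character to a standard neural-network layer."""
          return np.maximum(aggregated @ weight, 0.0)

      # Toy 4-vertex graph in CSR form, with 8-dimensional vertex features.
      indptr = np.array([0, 2, 3, 5, 6])
      indices = np.array([1, 2, 0, 0, 3, 2])
      x = np.random.rand(4, 8).astype(np.float32)
      w = np.random.rand(8, 8).astype(np.float32)
      h = combine(aggregate(indptr, indices, x), w)  # one GNN layer

  The aggregation loop's access pattern depends entirely on the edge list, which defeats the caching and prefetching assumptions of neural-network accelerators, while the dense combination step leaves the traversal machinery of graph-processing accelerators idle; this is why GNN acceleration architectures customize hardware units for each phase.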
  • Related Articles

    [1]Wei Zishu, Han Yue, Liu Sihao, Zhang Shengyu, Wu Fei. Lookahead Analysis and Discussion of Research Hotspots in Artificial Intelligence from 2021 to 2023[J]. Journal of Computer Research and Development, 2024, 61(5): 1261-1275. DOI: 10.7544/issn1000-1239.202440063
    [2]Liu Qixu, Liu Jiaxi, Jin Ze, Liu Xinyu, Xiao Juxin, Chen Yanhui, Zhu Hongwen, Tan Yaokang. Survey of Artificial Intelligence Based IoT Malware Detection[J]. Journal of Computer Research and Development, 2023, 60(10): 2234-2254. DOI: 10.7544/issn1000-1239.202330450
    [3]Wu Xinxin, Ou Yan, Li Wenming, Wang Da, Zhang Hao, Fan Dongrui. Acceleration of Sparse Convolutional Neural Network Based on Coarse-Grained Dataflow Architecture[J]. Journal of Computer Research and Development, 2021, 58(7): 1504-1517. DOI: 10.7544/issn1000-1239.2021.20200112
    [4]Wang Baonan, Hu Feng, Zhang Huanguo, Wang Chao. From Evolutionary Cryptography to Quantum Artificial Intelligent Cryptography[J]. Journal of Computer Research and Development, 2019, 56(10): 2112-2134. DOI: 10.7544/issn1000-1239.2019.20190374
    [5]Han Dong, Zhou Shengyuan, Zhi Tian, Chen Yunji, Chen Tianshi. A Survey of Artificial Intelligence Chip[J]. Journal of Computer Research and Development, 2019, 56(1): 7-22. DOI: 10.7544/issn1000-1239.2019.20180693
    [6]Chen Lili, Shen Li, Wang Zhiying, Xiao Nong, Yao Yiping. Computation Accelerator Virtualization for Domain Specific Applications[J]. Journal of Computer Research and Development, 2011, 48(11): 2103-2110.
    [7]Xia Hui, Jia Zhiping, Zhang Feng, Li Xin, Chen Renhai, Edwin H.-M. Sha. The Research and Application of a Specific Instruction Processor for AES[J]. Journal of Computer Research and Development, 2011, 48(8): 1554-1562.
    [8]Zhu Yi, Huang Zhiqiu, Zhou Hang, Liu Linyuan. A Method for Generating Software Architecture Models from Process Algebra Specifications[J]. Journal of Computer Research and Development, 2011, 48(2): 241-250.
    [9]Li Yuqin, Zhao Wenyun. A Feature Oriented Approach to Mapping from Domain Requirements to Product Line Architecture[J]. Journal of Computer Research and Development, 2007, 44(7): 1236-1242.
    [10]Li Yong, Wang Zhiying, Zhao Xuemi, Yue Hong. Design of Application Specific Instruction-Set Processors Directed by Configuration Stream Driven Computing Architecture[J]. Journal of Computer Research and Development, 2007, 44(4): 714-721.
  • Cited by

    Periodical cited type (7)

    1. Xiao Guoqing, Li Xueqi, Chen Yuedan, Tang Zhuo, Jiang Wenjun, Li Kenli. A Survey of Large-Scale Graph Neural Networks. Chinese Journal of Computers. 2024(01): 148-171.
    2. Lin Jingjing, Ye Zhonglin, Zhao Haixing, Li Zhuoran. A Survey of Hypergraph Neural Networks. Journal of Computer Research and Development. 2024(02): 362-384.
    3. Tan Huisheng, Yan Shuqi, Yang Wei. Hardware Accelerator Design for Skeleton Recognition Based on Spatial-Temporal Graph Convolutional Networks. Electronic Measurement Technology. 2024(11): 36-43.
    4. Tang Jian, Liu Yuqing. Simulation of a Molecular Property Prediction Method with Improved Fourier-Domain Transformation. Computer Simulation. 2023(01): 505-509.
    5. Chen Xi, Yu Hongfang, Wu Tao, Liu Ling, Zhou Pan, Xu Xiaoqiong, Lin Peng, Yin Xiangyu, Luo Long. Development and Application of a Lightweight Network Simulation Platform for Teaching and Research Scenarios. Journal of Southwest Minzu University (Natural Science Edition). 2023(02): 173-179.
    6. Wang Zhaopeng, Wang Ruibo, Jiang Xianyang. Design of an Energy-Efficient FPGA-Based Accelerator for MobileNet_SSD. Information Technology. 2023(12): 1-7.
    7. Zhang Jin, Zhu Guixiang, Wang Yuchen, Zheng Shuojia, Chen Jinglu. A Cross-Border E-Commerce Recommendation Model Based on Heterogeneous Graph Representation Learning. Journal of Electronics & Information Technology. 2022(11): 4008-4017.

    Other cited types (17)
