• A China Top-Quality Science and Technology Journal
  • CCF Recommended Class A Chinese Journal
  • Tier 1 (T1) High-Quality Science and Technology Journal in Computing
Li Han, Yan Mingyu, Lü Zhengyang, Li Wenming, Ye Xiaochun, Fan Dongrui, Tang Zhimin. Survey on Graph Neural Network Acceleration Architectures[J]. Journal of Computer Research and Development, 2021, 58(6): 1204-1229. DOI: 10.7544/issn1000-1239.2021.20210166

Survey on Graph Neural Network Acceleration Architectures

Funds: This work was supported by the National Natural Science Foundation of China (61732018, 61872335, 61802367), the International Partnership Program of the Chinese Academy of Sciences (171111KYSB20200002), and the Open Project Program of the State Key Laboratory of Mathematical Engineering and Advanced Computing (2019A07).
  • Published Date: May 31, 2021
  • Abstract: Recently, emerging graph neural networks (GNNs) have received extensive attention from academia and industry for their powerful graph learning and reasoning capabilities, and are considered a core force driving artificial intelligence into the stage of “cognitive intelligence.” Because GNNs integrate the execution processes of both traditional graph processing and neural networks, they naturally exhibit a hybrid execution pattern in which irregular and regular computation and memory-access behaviors coexist. This pattern prevents traditional processors, as well as existing graph-processing and neural-network acceleration architectures, from handling the two opposing execution behaviors at the same time, so they cannot meet the acceleration requirements of GNNs. To solve this problem, acceleration architectures tailored for GNNs continue to emerge. They customize computing hardware units and on-chip memory hierarchies for GNNs, optimize computation and memory-access behaviors, and achieve good acceleration results. Starting from the challenges faced in designing GNN acceleration architectures, this paper systematically analyzes and introduces the overall structure designs and key optimization techniques in this field from the perspectives of computation, on-chip memory access, and off-chip memory access. Finally, future directions of GNN acceleration architecture design are discussed from several angles, which we hope will inspire researchers in this field.
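The hybrid execution pattern described in the abstract can be made concrete with a minimal sketch (plain Python, with a hypothetical toy graph and an identity weight matrix): a GNN layer alternates an irregular, graph-dependent aggregation phase with a regular, dense combination phase.

```python
def aggregate(features, neighbors):
    # Irregular phase: memory accesses follow the graph structure, so each
    # vertex gathers a different, data-dependent set of feature rows.
    out = []
    for v, nbrs in enumerate(neighbors):
        acc = list(features[v])  # include the vertex's own features
        for u in nbrs:
            acc = [a + b for a, b in zip(acc, features[u])]
        out.append(acc)
    return out

def combine(features, weight):
    # Regular phase: a dense matrix multiply, identical for every vertex,
    # with predictable streaming memory access.
    return [[sum(f[k] * weight[k][j] for k in range(len(f)))
             for j in range(len(weight[0]))]
            for f in features]

# Hypothetical toy graph: edges 0-1 and 1-2 (undirected), 2 features per vertex.
neighbors = [[1], [0, 2], [1]]
features  = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
weight    = [[1.0, 0.0], [0.0, 1.0]]  # identity weight, for readability

h = combine(aggregate(features, neighbors), weight)
# h holds the layer output: each row sums a vertex's features with its neighbors'
```

The two functions have opposing hardware profiles, which is exactly why a single fixed architecture struggles: `aggregate` is bound by irregular, graph-dependent memory access, while `combine` is compute-bound with regular access.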
  • Cited by

    Periodical citations (7)

    1. Xiao Guoqing, Li Xueqi, Chen Yuedan, Tang Zhuo, Jiang Wenjun, Li Kenli. A survey on large-scale graph neural networks. Chinese Journal of Computers. 2024(01): 148-171.
    2. Lin Jingjing, Ye Zhonglin, Zhao Haixing, Li Zhuoran. A survey of hypergraph neural networks. Journal of Computer Research and Development. 2024(02): 362-384.
    3. Tan Huisheng, Yan Shuqi, Yang Wei. Hardware accelerator design for skeleton recognition with spatial-temporal graph convolutional networks. Electronic Measurement Technology. 2024(11): 36-43.
    4. Tang Jian, Liu Yuqing. Simulation of a molecular property prediction method with improved Fourier-domain transformation. Computer Simulation. 2023(01): 505-509.
    5. Chen Xi, Yu Hongfang, Wu Tao, Liu Ling, Zhou Pan, Xu Xiaoqiong, Lin Peng, Yin Xiangyu, Luo Long. Development and application of a lightweight network simulation platform for teaching and research scenarios. Journal of Southwest Minzu University (Natural Science Edition). 2023(02): 173-179.
    6. Wang Zhaopeng, Wang Ruibo, Jiang Xianyang. Design of an energy-efficient FPGA-based accelerator for MobileNet_SSD. Information Technology. 2023(12): 1-7.
    7. Zhang Jin, Zhu Guixiang, Wang Yuchen, Zheng Shuojia, Chen Jinglu. A cross-border e-commerce recommendation model based on heterogeneous graph representation learning. Journal of Electronics & Information Technology. 2022(11): 4008-4017.

    Other citation types (18)
