    Li Han, Yan Mingyu, Lü Zhengyang, Li Wenming, Ye Xiaochun, Fan Dongrui, Tang Zhimin. Survey on Graph Neural Network Acceleration Architectures[J]. Journal of Computer Research and Development, 2021, 58(6): 1204-1229. DOI: 10.7544/issn1000-1239.2021.20210166

    Survey on Graph Neural Network Acceleration Architectures


      Abstract: Recently, the emerging graph neural networks (GNNs) have received extensive attention from academia and industry due to their powerful graph learning and reasoning capabilities, and are considered to be the core force driving the field of artificial intelligence toward the “cognitive intelligence” stage. Since GNNs integrate the execution processes of both traditional graph processing and neural networks, a hybrid execution pattern naturally arises in which irregular and regular computation and memory access behaviors coexist. This execution pattern makes traditional processors, as well as existing graph processing and neural network acceleration architectures, unable to cope with the two opposing execution behaviors at the same time, so they cannot meet the acceleration requirements of GNNs. To solve this problem, acceleration architectures tailored for GNNs continue to emerge. They customize computing hardware units and on-chip storage hierarchies for GNNs and optimize computation and memory access behaviors, achieving good acceleration results. Starting from the design challenges posed by GNN execution behaviors, this paper systematically and thoroughly analyzes and introduces the overall architecture design and the key optimization techniques in this field at the computation, on-chip memory access, and off-chip memory access levels. Finally, future directions of GNN acceleration architecture design are discussed from different perspectives, in the hope of inspiring researchers in this field.
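The hybrid execution pattern described above can be made concrete with a minimal sketch (not from the paper; the toy graph, feature sizes, and function names are hypothetical): a single GNN layer typically gathers neighbor features along the graph topology (irregular, data-dependent memory accesses) and then transforms the result with a dense weight matrix (regular, neural-network-style compute).

```python
# Illustrative sketch of a GNN layer's two-phase execution pattern.
# The graph, features, and weights here are toy values for demonstration.

# Irregular structure: adjacency list and per-node 2-d feature vectors.
neighbors = {0: [1, 2], 1: [0], 2: [0, 1]}
features = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}
weight = [[0.5, 0.0], [0.0, 0.5]]  # dense layer weight (regular compute)

def aggregate(v):
    """Irregular phase: gather and sum neighbor features.

    Accesses follow the graph topology, so they are data-dependent
    and cache-unfriendly -- the graph-processing-like behavior."""
    acc = [0.0, 0.0]
    for u in neighbors[v]:
        for i in range(2):
            acc[i] += features[u][i]
    return acc

def combine(h):
    """Regular phase: dense matrix-vector product, like an NN layer."""
    return [sum(h[i] * weight[i][j] for i in range(2)) for j in range(2)]

# One layer of propagation: every vertex aggregates, then combines.
updated = {v: combine(aggregate(v)) for v in neighbors}
```

The two phases stress hardware in opposite ways, which is why architectures specialized for only graph processing or only neural networks handle one phase well but not the other.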

