
    Convolutional Neural Network Construction Method for Embedded FPGAs Oriented to Edge Computing

    • Abstract: Applications and services with high computational demands are gradually migrating from centralized cloud computing centers to embedded environments at the network edge. Owing to their flexibility and high energy efficiency, FPGAs are widely used in embedded systems for edge computing. Conventional FPGA-based convolutional neural network construction methods suffer from long design cycles and limited optimization space, and therefore cannot effectively explore the design space of hardware accelerators; this problem is especially pronounced in embedded environments at the network edge. To address it, a general construction method for convolutional neural networks on embedded FPGA platforms oriented to edge computing is proposed. By designing an accelerator core that can be reused across the network layers of the convolutional neural network function, a performance-optimized convolutional neural network is implemented with a small amount of hardware resources; HLS design optimization is achieved through techniques such as scalable design, buffer optimization, and dataflow optimization. Convolutional neural networks were built on embedded FPGA platforms with this method, and the experimental results show that the optimized network model has advantages in power consumption and performance compared with a Xeon E5-1620 CPU and a GTX Titan GPU, making it well suited to edge computing environments.

       

      Abstract: At present, applications and services with high computational consumption are gradually migrating from centralized cloud computing centers to embedded environments at the network edge. FPGAs are widely used in embedded systems for edge computing because of their flexibility and high energy efficiency. The conventional FPGA-based convolutional neural network construction method has shortcomings such as a long design cycle and a small optimization space, which lead to ineffective exploration of the design space of the targeted hardware accelerator, especially in network-edge embedded environments. To overcome these issues, a general, high-level synthesis (HLS) based method for constructing convolutional neural networks on embedded FPGAs oriented to edge computing is proposed. A highly reusable accelerator function is designed to construct the optimized convolutional neural network with low hardware resource consumption. A scalable design methodology, memory optimization, and dataflow enhancement are applied to the accelerator core through the HLS design strategy. The convolutional neural network is built on embedded FPGA platforms. The results show advantages in performance and power consumption compared with a Xeon E5-1620 CPU and a GTX K80 GPU, indicating that the approach is suitable for edge computing environments.
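
      To make the layer-reusable accelerator idea concrete, the sketch below shows what such a core might look like in HLS-style C++. It is a minimal illustration under assumptions of this note, not the paper's actual design: the function name conv_core, the bounds MAX_IC and MAX_K, the float data type, and the stride-1, no-padding loop structure are all hypothetical. The runtime shape parameters let one hardware function serve every convolutional layer, and the on-chip weight buffer and pipeline pragma hint at the kind of buffer and dataflow optimizations the abstract refers to.

      // Minimal HLS-style sketch of a layer-reusable convolution core.
      // Illustrative example only; names, bounds, and types are assumptions,
      // not the paper's implementation.

      const int MAX_IC = 512;  // assumed upper bound on input channels
      const int MAX_K  = 3;    // assumed upper bound on kernel size

      // One accelerator core reused by all convolutional layers: the layer
      // shape is passed at run time, so a single hardware function (and a
      // small amount of FPGA resources) serves the whole network.
      // Assumes stride 1 and no padding, i.e. OH = IH - K + 1, OW = IW - K + 1.
      void conv_core(const float *ifm,  // input feature map  [IC][IH][IW]
                     const float *wgt,  // weights            [OC][IC][K][K]
                     float       *ofm,  // output feature map [OC][OH][OW]
                     int IC, int IH, int IW,
                     int OC, int OH, int OW, int K)
      {
          // On-chip weight cache: one output channel's weights are copied
          // from external memory once and reused for every output pixel.
          static float wbuf[MAX_IC][MAX_K][MAX_K];

          for (int oc = 0; oc < OC; ++oc) {
              for (int ic = 0; ic < IC; ++ic)
                  for (int kh = 0; kh < K; ++kh)
                      for (int kw = 0; kw < K; ++kw)
                          wbuf[ic][kh][kw] = wgt[((oc * IC + ic) * K + kh) * K + kw];

              for (int oh = 0; oh < OH; ++oh) {
                  for (int ow = 0; ow < OW; ++ow) {
                      float acc = 0.0f;
                      // Multiply-accumulate over the receptive field.
                      for (int ic = 0; ic < IC; ++ic) {
                          for (int kh = 0; kh < K; ++kh) {
                              for (int kw = 0; kw < K; ++kw) {
      #pragma HLS PIPELINE II=1
                                  acc += ifm[(ic * IH + oh + kh) * IW + (ow + kw)]
                                       * wbuf[ic][kh][kw];
                              }
                          }
                      }
                      ofm[(oc * OH + oh) * OW + ow] = acc;
                  }
              }
          }
      }

      In a full design, the host or processing system would invoke this single core once per convolutional layer with that layer's shape parameters, and further HLS directives (for example ARRAY_PARTITION, DATAFLOW, or loop tiling) would be tuned per platform; exploring such directive and tiling choices is the accelerator design space the proposed method targets.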

       
