    Optimum Research on Inner-Inst Memory Access Conflict for Dataflow Architecture

    • Abstract: The rapid rise of artificial intelligence applications such as neural networks, image recognition, and text recognition poses great challenges to traditional processor design. Coarse-grained dataflow architectures have become a research hotspot because they combine high instruction-level parallelism with broad applicability. However, the processing elements of coarse-grained dataflow architectures use random access memory as their storage structure, and most operand data in neural networks is access-intensive, so a large number of intra-instruction (inner-inst) operand access conflicts arise. An analysis of the memory access behavior of typical neural networks shows that such intra-instruction operand conflicts significantly degrade the utilization of the computing units. Based on this observation, a flexible data redundancy strategy (FRS) is proposed: during the compilation stage, redundant storage space is allocated for operands whose accesses conflict within an instruction, which reduces intra-instruction operand access latency and effectively lowers the number of conflicts in the RAM. The experiments use the typical neural networks LeNet and AlexNet as benchmarks. With FRS, power efficiency improves by 30.21% and 12.37% over the Round-Robin and ReHash non-redundant strategies, respectively, and by 27.95% over a two-copy full data redundancy strategy.
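    The abstract describes FRS only at a high level. Below is a minimal, illustrative sketch of the compile-time idea it outlines: operands of the same instruction that map to the same RAM bank are given a redundant copy in another bank, so the instruction can fetch them in parallel. The bank count, the modulo address-to-bank mapping, and all names (NUM_BANKS, bank_of, plan_redundancy) are assumptions made for illustration, not details taken from the paper.

```python
# Illustrative sketch of a flexible data redundancy strategy (FRS):
# at compile time, operands of the same instruction that fall into the
# same RAM bank get a redundant copy in another bank, so both operands
# can be read in the same cycle. All parameters here are assumptions.

from collections import defaultdict

NUM_BANKS = 4  # assumed number of RAM banks per processing element


def bank_of(addr: int) -> int:
    """Assumed address-to-bank mapping (simple interleaving)."""
    return addr % NUM_BANKS


def plan_redundancy(instructions):
    """For each instruction (a list of operand addresses), record which
    operands should receive a redundant copy and in which bank."""
    redundant = {}  # addr -> extra bank holding the copy
    for operands in instructions:
        used = defaultdict(list)
        for addr in operands:
            used[bank_of(addr)].append(addr)
        busy = set(used)
        for bank, addrs in used.items():
            # every operand beyond the first in a bank conflicts within
            # this instruction; give it a copy in a currently free bank
            for addr in addrs[1:]:
                free = next((b for b in range(NUM_BANKS) if b not in busy), None)
                if free is None:
                    break  # no free bank left: this conflict stays serialized
                redundant[addr] = free
                busy.add(free)
    return redundant


if __name__ == "__main__":
    # operands 0x10 and 0x20 of the first instruction both map to bank 0
    insts = [[0x10, 0x20, 0x13], [0x21, 0x32]]
    print(plan_redundancy(insts))  # {32: 1}: address 0x20 gets a copy in bank 1
```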

       
