ISSN 1000-1239 CN 11-1777/TP

计算机研究与发展 (Journal of Computer Research and Development), 2020, Vol. 57, Issue 4: 709-722. doi: 10.7544/issn1000-1239.2020.20190863

Special Topic: 2020 Special Issue on Data-Driven Networking

• Network Technology •




DNN Inference Acceleration via Heterogeneous IoT Devices Collaboration

Sun Sheng1,2, Li Xujing1,2, Liu Min1,2, Yang Bo1,2, Guo Xiaobing3   

  1) Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190; 2) University of Chinese Academy of Sciences, Beijing 100049; 3) Lenovo Research, Beijing 100085
  • Online: 2020-04-01
  • Supported by: 
    This work was supported by the National Natural Science Foundation of China (61732017, 61872028).


Abstract: Deep neural networks (DNNs) have been widely deployed in a variety of intelligent applications (e.g., image and video recognition). However, because DNN inference is computationally heavy, resource-constrained IoT devices cannot execute DNN inference tasks locally on their own. Existing cloud-assisted approaches suffer from unpredictable communication latency and unstable performance of remote servers. A promising alternative is to leverage collaborating IoT devices for distributed, scalable DNN inference. However, existing works consider only homogeneous IoT devices with static partitioning. There is therefore an urgent need for a framework that adaptively partitions DNN tasks and orchestrates distributed inference among heterogeneous, resource-constrained IoT devices. Such a framework faces two main challenges. First, it is difficult to accurately profile a DNN's multi-layer inference latency. Second, it is difficult to adapt the collaborative inference strategy in real time in heterogeneous, dynamic multi-device environments. To this end, we first propose a fine-grained, interpretable multi-layer latency prediction model that abstracts complex layer parameters. We then leverage evolutionary reinforcement learning (ERL) to adaptively determine a near-optimal partitioning strategy for DNN inference tasks. Real-world experiments on Raspberry Pi devices show that the proposed method significantly accelerates inference in dynamic and heterogeneous environments.
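To make the partitioning problem in the abstract concrete, the sketch below shows the simplest version: choosing a split point in a chain of DNN layers so that end-to-end latency (local compute, plus transfer of the intermediate activation, plus remote compute) is minimized. This is an illustrative toy model, not the paper's method: the function name, per-layer FLOP counts, activation sizes, and device speeds are all hypothetical, and the paper's actual approach uses a learned multi-layer latency predictor and ERL rather than this exhaustive search.

```python
def best_split(layer_flops, act_bytes, speed_local, speed_remote, bandwidth):
    """Brute-force the best split point for a chain of n DNN layers.

    layer_flops  : compute cost of each layer (hypothetical FLOP counts)
    act_bytes    : act_bytes[k] is the number of bytes that must be sent
                   if the first k layers run locally; act_bytes[0] is the
                   raw input size (the input resides on the local device),
                   so len(act_bytes) == n + 1
    speed_local  : local device throughput (FLOPs per second)
    speed_remote : remote device throughput (FLOPs per second)
    bandwidth    : device-to-device link speed (bytes per second)

    Returns (k, latency): run layers [0, k) locally and [k, n) remotely.
    """
    n = len(layer_flops)
    best_k, best_t = 0, float("inf")
    for k in range(n + 1):
        t_local = sum(layer_flops[:k]) / speed_local
        t_remote = sum(layer_flops[k:]) / speed_remote
        # No transfer needed if everything runs locally (k == n).
        t_tx = act_bytes[k] / bandwidth if k < n else 0.0
        total = t_local + t_tx + t_remote
        if total < best_t:
            best_k, best_t = k, total
    return best_k, best_t


if __name__ == "__main__":
    # Toy scenario: a large raw input (16 B) but small activations after
    # layer 1, and a remote device twice as fast as the local one.
    k, t = best_split(
        layer_flops=[4, 4, 4],
        act_bytes=[16, 2, 2, 1],
        speed_local=2,
        speed_remote=4,
        bandwidth=2,
    )
    print(k, t)  # an interior split (k = 1) beats both all-local and all-remote
```

Even this toy model exhibits the trade-off the abstract motivates: offloading everything pays for shipping the raw input, running everything locally pays for the slower device, and the optimum often lies at an early layer where activations have shrunk. The paper replaces the known per-layer costs assumed here with a learned prediction model, and the exhaustive search with ERL, so the strategy can track heterogeneous, changing device conditions.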

Key words: DNN inference acceleration, heterogeneous device collaboration, evolutionary reinforcement learning, multi-layer prediction model, partitioning strategy