    DNN Inference Acceleration via Heterogeneous IoT Devices Collaboration


      Abstract: Deep neural networks (DNNs) have been widely deployed in a variety of intelligent applications, such as image and video recognition. However, because DNN inference is computationally intensive, resource-constrained Internet of things (IoT) devices are ill-suited to executing DNN inference tasks locally on their own. Existing cloud-assisted approaches are severely affected by unpredictable communication latency and the unstable performance of remote servers. A promising alternative is to leverage collaborating IoT devices to achieve distributed, scalable DNN inference. However, existing works consider only homogeneous IoT devices with static partitioning. There is thus an urgent need for a framework that adaptively partitions DNN tasks and orchestrates distributed inference among heterogeneous, resource-constrained IoT devices. Such a framework faces two main challenges. First, it is difficult to accurately profile the multi-layer inference latency of a DNN. Second, it is difficult to learn the collaborative inference strategy adaptively and in real time in heterogeneous, dynamic environments. To this end, we first propose a fine-grained, interpretable multi-layer latency prediction model that abstracts complex layer parameters. We then leverage evolutionary reinforcement learning (ERL) to adaptively determine a near-optimal partitioning strategy for DNN inference tasks. Real-world experiments on Raspberry Pi devices show that the proposed method significantly accelerates DNN inference in dynamic and heterogeneous environments.
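
      The latency predictor mentioned in the abstract maps per-layer parameters to device-specific inference latency. Since the abstract does not spell out the model form, the following is a minimal sketch under assumed choices: one least-squares linear model per layer type, with FLOPs and parameter count as features, fitted from profiling samples collected on each device. The class name, feature set, and all numbers are hypothetical.

        # Hypothetical sketch: one linear latency model per layer type.
        # Features (FLOPs, parameter count) and the linear form are assumptions;
        # the paper's actual fine-grained prediction model may differ.
        import numpy as np

        class LayerLatencyModel:
            """Fits latency ~ w . [features, 1] per layer type via least squares."""
            def __init__(self):
                self.coef = {}  # layer_type -> fitted weight vector (incl. bias)

            def fit(self, layer_type, features, latencies):
                # features: (n_samples, n_features); latencies: (n_samples,)
                X = np.hstack([features, np.ones((len(features), 1))])
                w, *_ = np.linalg.lstsq(X, latencies, rcond=None)
                self.coef[layer_type] = w

            def predict(self, layer_type, feature_vec):
                w = self.coef[layer_type]
                return float(np.dot(np.append(feature_vec, 1.0), w))

        # Usage: profile convolution layers on one device, then predict.
        rng = np.random.default_rng(0)
        flops = rng.uniform(1e6, 1e9, size=50)
        params = rng.uniform(1e3, 1e7, size=50)
        X = np.column_stack([flops, params])
        y = 2e-9 * flops + 5e-8 * params + 0.002  # synthetic profiling data
        model = LayerLatencyModel()
        model.fit("conv", X, y)
        print(model.predict("conv", np.array([5e8, 1e6])))  # seconds (synthetic)

      A linear form keeps the predictor interpretable: each fitted weight directly attributes latency to one layer feature, in line with the interpretability the abstract emphasizes.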
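
      For the partitioning step the abstract names evolutionary reinforcement learning (ERL); the toy loop below keeps only the evolutionary half (population, fitness, selection, mutation) and scores each candidate assignment of layers to devices with an estimated end-to-end latency. The cost constants, the single shared bandwidth, and the assumption that each layer runs wholly on one device are illustrative, not taken from the paper; the full ERL method would additionally refine population members with reinforcement-learning updates between generations.

        # Hypothetical sketch: evolutionary search over layer-to-device assignments.
        # All constants are made up for illustration.
        import random

        LAYER_FLOPS = [4e8, 2e8, 8e8, 1e8, 3e8, 6e8]       # per-layer compute cost
        LAYER_OUT_BYTES = [2e6, 1e6, 4e6, 5e5, 1e6, 2e5]   # inter-layer activation size
        DEVICE_SPEED = [1e9, 5e8, 2e9]                     # FLOPs/s of each IoT device
        BANDWIDTH = 1e7                                    # bytes/s between devices

        def latency(assign):
            """Estimated end-to-end latency: sequential compute plus transfers."""
            t = 0.0
            for i, dev in enumerate(assign):
                t += LAYER_FLOPS[i] / DEVICE_SPEED[dev]
                if i + 1 < len(assign) and assign[i + 1] != dev:
                    t += LAYER_OUT_BYTES[i] / BANDWIDTH  # activation crosses devices
            return t

        def mutate(assign):
            # Reassign one randomly chosen layer to a random device.
            child = list(assign)
            child[random.randrange(len(child))] = random.randrange(len(DEVICE_SPEED))
            return child

        random.seed(0)
        pop = [[random.randrange(len(DEVICE_SPEED)) for _ in LAYER_FLOPS]
               for _ in range(20)]
        for _ in range(200):                  # generations
            pop.sort(key=latency)             # fitness = low predicted latency
            pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]
        best = min(pop, key=latency)
        print(best, latency(best))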
